Feb 01 14:16:57 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb 01 14:16:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 01 14:16:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 01 14:16:57 localhost kernel: BIOS-provided physical RAM map:
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 01 14:16:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 01 14:16:57 localhost kernel: NX (Execute Disable) protection: active
Feb 01 14:16:57 localhost kernel: APIC: Static calls initialized
Feb 01 14:16:57 localhost kernel: SMBIOS 2.8 present.
Feb 01 14:16:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 01 14:16:57 localhost kernel: Hypervisor detected: KVM
Feb 01 14:16:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 01 14:16:57 localhost kernel: kvm-clock: using sched offset of 5624425070 cycles
Feb 01 14:16:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 01 14:16:57 localhost kernel: tsc: Detected 2800.000 MHz processor
Feb 01 14:16:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 01 14:16:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 01 14:16:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 01 14:16:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 01 14:16:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 01 14:16:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 01 14:16:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 01 14:16:58 localhost kernel: Using GB pages for direct mapping
Feb 01 14:16:58 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb 01 14:16:58 localhost kernel: ACPI: Early table checksum verification disabled
Feb 01 14:16:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 01 14:16:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 01 14:16:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 01 14:16:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 01 14:16:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 01 14:16:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 01 14:16:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 01 14:16:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 01 14:16:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 01 14:16:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 01 14:16:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 01 14:16:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 01 14:16:58 localhost kernel: No NUMA configuration found
Feb 01 14:16:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 01 14:16:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 01 14:16:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 01 14:16:58 localhost kernel: Zone ranges:
Feb 01 14:16:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 01 14:16:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 01 14:16:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 01 14:16:58 localhost kernel:   Device   empty
Feb 01 14:16:58 localhost kernel: Movable zone start for each node
Feb 01 14:16:58 localhost kernel: Early memory node ranges
Feb 01 14:16:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 01 14:16:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 01 14:16:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 01 14:16:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 01 14:16:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 01 14:16:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 01 14:16:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 01 14:16:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 01 14:16:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 01 14:16:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 01 14:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 01 14:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 01 14:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 01 14:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 01 14:16:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 01 14:16:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 01 14:16:58 localhost kernel: TSC deadline timer available
Feb 01 14:16:58 localhost kernel: CPU topo: Max. logical packages:   8
Feb 01 14:16:58 localhost kernel: CPU topo: Max. logical dies:       8
Feb 01 14:16:58 localhost kernel: CPU topo: Max. dies per package:   1
Feb 01 14:16:58 localhost kernel: CPU topo: Max. threads per core:   1
Feb 01 14:16:58 localhost kernel: CPU topo: Num. cores per package:     1
Feb 01 14:16:58 localhost kernel: CPU topo: Num. threads per package:   1
Feb 01 14:16:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 01 14:16:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 01 14:16:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 01 14:16:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 01 14:16:58 localhost kernel: Booting paravirtualized kernel on KVM
Feb 01 14:16:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 01 14:16:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 01 14:16:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 01 14:16:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 01 14:16:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 01 14:16:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 01 14:16:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 01 14:16:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb 01 14:16:58 localhost kernel: random: crng init done
Feb 01 14:16:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 01 14:16:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 01 14:16:58 localhost kernel: Fallback order for Node 0: 0 
Feb 01 14:16:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 01 14:16:58 localhost kernel: Policy zone: Normal
Feb 01 14:16:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 01 14:16:58 localhost kernel: software IO TLB: area num 8.
Feb 01 14:16:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 01 14:16:58 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Feb 01 14:16:58 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 01 14:16:58 localhost kernel: Dynamic Preempt: voluntary
Feb 01 14:16:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 01 14:16:58 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 01 14:16:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 01 14:16:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 01 14:16:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 01 14:16:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 01 14:16:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 01 14:16:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 01 14:16:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 01 14:16:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 01 14:16:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 01 14:16:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 01 14:16:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 01 14:16:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 01 14:16:58 localhost kernel: Console: colour VGA+ 80x25
Feb 01 14:16:58 localhost kernel: printk: console [ttyS0] enabled
Feb 01 14:16:58 localhost kernel: ACPI: Core revision 20230331
Feb 01 14:16:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 01 14:16:58 localhost kernel: x2apic enabled
Feb 01 14:16:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 01 14:16:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 01 14:16:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 01 14:16:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 01 14:16:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 01 14:16:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 01 14:16:58 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 01 14:16:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 01 14:16:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 01 14:16:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 01 14:16:58 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 01 14:16:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 01 14:16:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 01 14:16:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 01 14:16:58 localhost kernel: active return thunk: retbleed_return_thunk
Feb 01 14:16:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 01 14:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 01 14:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 01 14:16:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 01 14:16:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 01 14:16:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 01 14:16:58 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 01 14:16:58 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 01 14:16:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 01 14:16:58 localhost kernel: landlock: Up and running.
Feb 01 14:16:58 localhost kernel: Yama: becoming mindful.
Feb 01 14:16:58 localhost kernel: SELinux:  Initializing.
Feb 01 14:16:58 localhost kernel: LSM support for eBPF active
Feb 01 14:16:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 01 14:16:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 01 14:16:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 01 14:16:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 01 14:16:58 localhost kernel: ... version:                0
Feb 01 14:16:58 localhost kernel: ... bit width:              48
Feb 01 14:16:58 localhost kernel: ... generic registers:      6
Feb 01 14:16:58 localhost kernel: ... value mask:             0000ffffffffffff
Feb 01 14:16:58 localhost kernel: ... max period:             00007fffffffffff
Feb 01 14:16:58 localhost kernel: ... fixed-purpose events:   0
Feb 01 14:16:58 localhost kernel: ... event mask:             000000000000003f
Feb 01 14:16:58 localhost kernel: signal: max sigframe size: 1776
Feb 01 14:16:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 01 14:16:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 01 14:16:58 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 01 14:16:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 01 14:16:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 01 14:16:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 01 14:16:58 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 01 14:16:58 localhost kernel: node 0 deferred pages initialised in 10ms
Feb 01 14:16:58 localhost kernel: Memory: 7763776K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Feb 01 14:16:58 localhost kernel: devtmpfs: initialized
Feb 01 14:16:58 localhost kernel: x86/mm: Memory block size: 128MB
Feb 01 14:16:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 01 14:16:58 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 01 14:16:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 01 14:16:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 01 14:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 01 14:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 01 14:16:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 01 14:16:58 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 01 14:16:58 localhost kernel: audit: type=2000 audit(1769955417.487:1): state=initialized audit_enabled=0 res=1
Feb 01 14:16:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 01 14:16:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 01 14:16:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 01 14:16:58 localhost kernel: cpuidle: using governor menu
Feb 01 14:16:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 01 14:16:58 localhost kernel: PCI: Using configuration type 1 for base access
Feb 01 14:16:58 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 01 14:16:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 01 14:16:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 01 14:16:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 01 14:16:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 01 14:16:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 01 14:16:58 localhost kernel: Demotion targets for Node 0: null
Feb 01 14:16:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 01 14:16:58 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 01 14:16:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 01 14:16:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 01 14:16:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 01 14:16:58 localhost kernel: ACPI: Interpreter enabled
Feb 01 14:16:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 01 14:16:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 01 14:16:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 01 14:16:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 01 14:16:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 01 14:16:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 01 14:16:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [3] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [4] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [5] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [6] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [7] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [8] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [9] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [10] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [11] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [12] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [13] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [14] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [15] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [16] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [17] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [18] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [19] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [20] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [21] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [22] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [23] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [24] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [25] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [26] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [27] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [28] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [29] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [30] registered
Feb 01 14:16:58 localhost kernel: acpiphp: Slot [31] registered
Feb 01 14:16:58 localhost kernel: PCI host bridge to bus 0000:00
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 01 14:16:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 01 14:16:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 01 14:16:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 01 14:16:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 01 14:16:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 01 14:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 01 14:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 01 14:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 01 14:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 01 14:16:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 01 14:16:58 localhost kernel: iommu: Default domain type: Translated
Feb 01 14:16:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 01 14:16:58 localhost kernel: SCSI subsystem initialized
Feb 01 14:16:58 localhost kernel: ACPI: bus type USB registered
Feb 01 14:16:58 localhost kernel: usbcore: registered new interface driver usbfs
Feb 01 14:16:58 localhost kernel: usbcore: registered new interface driver hub
Feb 01 14:16:58 localhost kernel: usbcore: registered new device driver usb
Feb 01 14:16:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 01 14:16:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 01 14:16:58 localhost kernel: PTP clock support registered
Feb 01 14:16:58 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 01 14:16:58 localhost kernel: NetLabel: Initializing
Feb 01 14:16:58 localhost kernel: NetLabel:  domain hash size = 128
Feb 01 14:16:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 01 14:16:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 01 14:16:58 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 01 14:16:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 01 14:16:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 01 14:16:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 01 14:16:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 01 14:16:58 localhost kernel: vgaarb: loaded
Feb 01 14:16:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 01 14:16:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 01 14:16:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 01 14:16:58 localhost kernel: pnp: PnP ACPI init
Feb 01 14:16:58 localhost kernel: pnp 00:03: [dma 2]
Feb 01 14:16:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 01 14:16:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 01 14:16:58 localhost kernel: NET: Registered PF_INET protocol family
Feb 01 14:16:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 01 14:16:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 01 14:16:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 01 14:16:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 01 14:16:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 01 14:16:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 01 14:16:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 01 14:16:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 01 14:16:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 01 14:16:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 01 14:16:58 localhost kernel: NET: Registered PF_XDP protocol family
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 01 14:16:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 01 14:16:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 01 14:16:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 01 14:16:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 22652 usecs
Feb 01 14:16:58 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 01 14:16:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 01 14:16:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 01 14:16:58 localhost kernel: ACPI: bus type thunderbolt registered
Feb 01 14:16:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 01 14:16:58 localhost kernel: Initialise system trusted keyrings
Feb 01 14:16:58 localhost kernel: Key type blacklist registered
Feb 01 14:16:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 01 14:16:58 localhost kernel: zbud: loaded
Feb 01 14:16:58 localhost kernel: integrity: Platform Keyring initialized
Feb 01 14:16:58 localhost kernel: integrity: Machine keyring initialized
Feb 01 14:16:58 localhost kernel: Freeing initrd memory: 88000K
Feb 01 14:16:58 localhost kernel: NET: Registered PF_ALG protocol family
Feb 01 14:16:58 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 01 14:16:58 localhost kernel: Key type asymmetric registered
Feb 01 14:16:58 localhost kernel: Asymmetric key parser 'x509' registered
Feb 01 14:16:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 01 14:16:58 localhost kernel: io scheduler mq-deadline registered
Feb 01 14:16:58 localhost kernel: io scheduler kyber registered
Feb 01 14:16:58 localhost kernel: io scheduler bfq registered
Feb 01 14:16:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 01 14:16:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 01 14:16:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 01 14:16:58 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 01 14:16:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 01 14:16:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 01 14:16:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 01 14:16:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 01 14:16:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 01 14:16:58 localhost kernel: Non-volatile memory driver v1.3
Feb 01 14:16:58 localhost kernel: rdac: device handler registered
Feb 01 14:16:58 localhost kernel: hp_sw: device handler registered
Feb 01 14:16:58 localhost kernel: emc: device handler registered
Feb 01 14:16:58 localhost kernel: alua: device handler registered
Feb 01 14:16:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 01 14:16:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 01 14:16:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 01 14:16:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 01 14:16:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 01 14:16:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 01 14:16:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 01 14:16:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb 01 14:16:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 01 14:16:58 localhost kernel: hub 1-0:1.0: USB hub found
Feb 01 14:16:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 01 14:16:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 01 14:16:58 localhost kernel: usbserial: USB Serial support registered for generic
Feb 01 14:16:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 01 14:16:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 01 14:16:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 01 14:16:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 01 14:16:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 01 14:16:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 01 14:16:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-01T14:16:57 UTC (1769955417)
Feb 01 14:16:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 01 14:16:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 01 14:16:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 01 14:16:58 localhost kernel: usbcore: registered new interface driver usbhid
Feb 01 14:16:58 localhost kernel: usbhid: USB HID core driver
Feb 01 14:16:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 01 14:16:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 01 14:16:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 01 14:16:58 localhost kernel: Initializing XFRM netlink socket
Feb 01 14:16:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 01 14:16:58 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 01 14:16:58 localhost kernel: Segment Routing with IPv6
Feb 01 14:16:58 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 01 14:16:58 localhost kernel: mpls_gso: MPLS GSO support
Feb 01 14:16:58 localhost kernel: IPI shorthand broadcast: enabled
Feb 01 14:16:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 01 14:16:58 localhost kernel: AES CTR mode by8 optimization enabled
Feb 01 14:16:58 localhost kernel: sched_clock: Marking stable (889001740, 153862790)->(1136033970, -93169440)
Feb 01 14:16:58 localhost kernel: registered taskstats version 1
Feb 01 14:16:58 localhost kernel: Loading compiled-in X.509 certificates
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 01 14:16:58 localhost kernel: Demotion targets for Node 0: null
Feb 01 14:16:58 localhost kernel: page_owner is disabled
Feb 01 14:16:58 localhost kernel: Key type .fscrypt registered
Feb 01 14:16:58 localhost kernel: Key type fscrypt-provisioning registered
Feb 01 14:16:58 localhost kernel: Key type big_key registered
Feb 01 14:16:58 localhost kernel: Key type encrypted registered
Feb 01 14:16:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 01 14:16:58 localhost kernel: Loading compiled-in module X.509 certificates
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb 01 14:16:58 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 01 14:16:58 localhost kernel: ima: No architecture policies found
Feb 01 14:16:58 localhost kernel: evm: Initialising EVM extended attributes:
Feb 01 14:16:58 localhost kernel: evm: security.selinux
Feb 01 14:16:58 localhost kernel: evm: security.SMACK64 (disabled)
Feb 01 14:16:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 01 14:16:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 01 14:16:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 01 14:16:58 localhost kernel: evm: security.apparmor (disabled)
Feb 01 14:16:58 localhost kernel: evm: security.ima
Feb 01 14:16:58 localhost kernel: evm: security.capability
Feb 01 14:16:58 localhost kernel: evm: HMAC attrs: 0x1
Feb 01 14:16:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 01 14:16:58 localhost kernel: Running certificate verification RSA selftest
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 01 14:16:58 localhost kernel: Running certificate verification ECDSA selftest
Feb 01 14:16:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 01 14:16:58 localhost kernel: clk: Disabling unused clocks
Feb 01 14:16:58 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 01 14:16:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb 01 14:16:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 01 14:16:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb 01 14:16:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 01 14:16:58 localhost kernel: Run /init as init process
Feb 01 14:16:58 localhost kernel:   with arguments:
Feb 01 14:16:58 localhost kernel:     /init
Feb 01 14:16:58 localhost kernel:   with environment:
Feb 01 14:16:58 localhost kernel:     HOME=/
Feb 01 14:16:58 localhost kernel:     TERM=linux
Feb 01 14:16:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Feb 01 14:16:58 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 01 14:16:58 localhost systemd[1]: Detected virtualization kvm.
Feb 01 14:16:58 localhost systemd[1]: Detected architecture x86-64.
Feb 01 14:16:58 localhost systemd[1]: Running in initrd.
Feb 01 14:16:58 localhost systemd[1]: No hostname configured, using default hostname.
Feb 01 14:16:58 localhost systemd[1]: Hostname set to <localhost>.
Feb 01 14:16:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 01 14:16:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 01 14:16:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 01 14:16:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 01 14:16:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 01 14:16:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 01 14:16:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 01 14:16:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 01 14:16:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 01 14:16:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 01 14:16:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 01 14:16:58 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 01 14:16:58 localhost systemd[1]: Reached target Local File Systems.
Feb 01 14:16:58 localhost systemd[1]: Reached target Path Units.
Feb 01 14:16:58 localhost systemd[1]: Reached target Slice Units.
Feb 01 14:16:58 localhost systemd[1]: Reached target Swaps.
Feb 01 14:16:58 localhost systemd[1]: Reached target Timer Units.
Feb 01 14:16:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 01 14:16:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 01 14:16:58 localhost systemd[1]: Listening on Journal Socket.
Feb 01 14:16:58 localhost systemd[1]: Listening on udev Control Socket.
Feb 01 14:16:58 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 01 14:16:58 localhost systemd[1]: Reached target Socket Units.
Feb 01 14:16:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 01 14:16:58 localhost systemd[1]: Starting Journal Service...
Feb 01 14:16:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 01 14:16:58 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 01 14:16:58 localhost systemd[1]: Starting Create System Users...
Feb 01 14:16:58 localhost systemd[1]: Starting Setup Virtual Console...
Feb 01 14:16:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 01 14:16:58 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 01 14:16:58 localhost systemd[1]: Finished Create System Users.
Feb 01 14:16:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 01 14:16:58 localhost systemd-journald[305]: Journal started
Feb 01 14:16:58 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/072bb88ed455426ca85083903b041dc8) is 8.0M, max 153.6M, 145.6M free.
Feb 01 14:16:57 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Feb 01 14:16:57 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Feb 01 14:16:58 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 01 14:16:58 localhost systemd[1]: Started Journal Service.
Feb 01 14:16:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 01 14:16:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 01 14:16:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 01 14:16:58 localhost systemd[1]: Finished Setup Virtual Console.
Feb 01 14:16:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 01 14:16:58 localhost systemd[1]: Starting dracut cmdline hook...
Feb 01 14:16:58 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Feb 01 14:16:58 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 01 14:16:58 localhost systemd[1]: Finished dracut cmdline hook.
Feb 01 14:16:58 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 01 14:16:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 01 14:16:58 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 01 14:16:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 01 14:16:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 01 14:16:58 localhost kernel: RPC: Registered udp transport module.
Feb 01 14:16:58 localhost kernel: RPC: Registered tcp transport module.
Feb 01 14:16:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 01 14:16:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 01 14:16:58 localhost rpc.statd[443]: Version 2.5.4 starting
Feb 01 14:16:58 localhost rpc.statd[443]: Initializing NSM state
Feb 01 14:16:58 localhost rpc.idmapd[448]: Setting log level to 0
Feb 01 14:16:58 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 01 14:16:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 01 14:16:58 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Feb 01 14:16:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 01 14:16:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 01 14:16:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 01 14:16:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 01 14:16:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 01 14:16:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 01 14:16:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 01 14:16:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 01 14:16:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 01 14:16:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 01 14:16:58 localhost systemd[1]: Reached target Network.
Feb 01 14:16:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 01 14:16:58 localhost systemd[1]: Starting dracut initqueue hook...
Feb 01 14:16:58 localhost kernel: libata version 3.00 loaded.
Feb 01 14:16:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 01 14:16:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 01 14:16:58 localhost kernel: scsi host0: ata_piix
Feb 01 14:16:58 localhost kernel: scsi host1: ata_piix
Feb 01 14:16:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 01 14:16:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 01 14:16:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 01 14:16:58 localhost kernel:  vda: vda1
Feb 01 14:16:58 localhost kernel: ata1: found unknown device (class 0)
Feb 01 14:16:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 01 14:16:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 01 14:16:58 localhost systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:16:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 01 14:16:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 01 14:16:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 01 14:16:58 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 01 14:16:58 localhost systemd[1]: Reached target Initrd Root Device.
Feb 01 14:16:58 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 01 14:16:58 localhost systemd[1]: Finished dracut initqueue hook.
Feb 01 14:16:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 01 14:16:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 01 14:16:58 localhost systemd[1]: Reached target Remote File Systems.
Feb 01 14:16:58 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 01 14:16:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 01 14:16:59 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 01 14:16:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 01 14:16:59 localhost systemd[1]: Reached target System Initialization.
Feb 01 14:16:59 localhost systemd[1]: Reached target Basic System.
Feb 01 14:16:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb 01 14:16:59 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Feb 01 14:16:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb 01 14:16:59 localhost systemd[1]: Mounting /sysroot...
Feb 01 14:16:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 01 14:16:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb 01 14:16:59 localhost kernel: XFS (vda1): Ending clean mount
Feb 01 14:16:59 localhost systemd[1]: Mounted /sysroot.
Feb 01 14:16:59 localhost systemd[1]: Reached target Initrd Root File System.
Feb 01 14:16:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 01 14:16:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 01 14:16:59 localhost systemd[1]: Reached target Initrd File Systems.
Feb 01 14:16:59 localhost systemd[1]: Reached target Initrd Default Target.
Feb 01 14:16:59 localhost systemd[1]: Starting dracut mount hook...
Feb 01 14:16:59 localhost systemd[1]: Finished dracut mount hook.
Feb 01 14:16:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 01 14:16:59 localhost rpc.idmapd[448]: exiting on signal 15
Feb 01 14:16:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 01 14:16:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 01 14:16:59 localhost systemd[1]: Stopped target Network.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Timer Units.
Feb 01 14:16:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 01 14:16:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Basic System.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Path Units.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Remote File Systems.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Slice Units.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Socket Units.
Feb 01 14:16:59 localhost systemd[1]: Stopped target System Initialization.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Local File Systems.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Swaps.
Feb 01 14:16:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut mount hook.
Feb 01 14:16:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 01 14:16:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 01 14:16:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 01 14:16:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 01 14:16:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 01 14:16:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 01 14:16:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 01 14:16:59 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 01 14:16:59 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 01 14:16:59 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 01 14:16:59 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 01 14:16:59 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 01 14:16:59 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Closed udev Control Socket.
Feb 01 14:16:59 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Closed udev Kernel Socket.
Feb 01 14:16:59 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 01 14:16:59 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 01 14:16:59 localhost systemd[1]: Starting Cleanup udev Database...
Feb 01 14:16:59 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 01 14:16:59 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 01 14:16:59 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Stopped Create System Users.
Feb 01 14:16:59 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 01 14:16:59 localhost systemd[1]: Finished Cleanup udev Database.
Feb 01 14:16:59 localhost systemd[1]: Reached target Switch Root.
Feb 01 14:16:59 localhost systemd[1]: Starting Switch Root...
Feb 01 14:16:59 localhost systemd[1]: Switching root.
Feb 01 14:16:59 localhost systemd-journald[305]: Journal stopped
Feb 01 14:17:00 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Feb 01 14:17:00 localhost kernel: audit: type=1404 audit(1769955419.901:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability open_perms=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:17:00 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:17:00 localhost kernel: audit: type=1403 audit(1769955419.999:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 01 14:17:00 localhost systemd[1]: Successfully loaded SELinux policy in 100.948ms.
Feb 01 14:17:00 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.725ms.
Feb 01 14:17:00 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 01 14:17:00 localhost systemd[1]: Detected virtualization kvm.
Feb 01 14:17:00 localhost systemd[1]: Detected architecture x86-64.
Feb 01 14:17:00 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:17:00 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Stopped Switch Root.
Feb 01 14:17:00 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 01 14:17:00 localhost systemd[1]: Created slice Slice /system/getty.
Feb 01 14:17:00 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 01 14:17:00 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 01 14:17:00 localhost systemd[1]: Created slice User and Session Slice.
Feb 01 14:17:00 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 01 14:17:00 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 01 14:17:00 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 01 14:17:00 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 01 14:17:00 localhost systemd[1]: Stopped target Switch Root.
Feb 01 14:17:00 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 01 14:17:00 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 01 14:17:00 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 01 14:17:00 localhost systemd[1]: Reached target Path Units.
Feb 01 14:17:00 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 01 14:17:00 localhost systemd[1]: Reached target Slice Units.
Feb 01 14:17:00 localhost systemd[1]: Reached target Swaps.
Feb 01 14:17:00 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 01 14:17:00 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 01 14:17:00 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 01 14:17:00 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 01 14:17:00 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 01 14:17:00 localhost systemd[1]: Listening on udev Control Socket.
Feb 01 14:17:00 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 01 14:17:00 localhost systemd[1]: Mounting Huge Pages File System...
Feb 01 14:17:00 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 01 14:17:00 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 01 14:17:00 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 01 14:17:00 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 01 14:17:00 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 01 14:17:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 01 14:17:00 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 01 14:17:00 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Feb 01 14:17:00 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 01 14:17:00 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 01 14:17:00 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 01 14:17:00 localhost systemd[1]: Stopped Journal Service.
Feb 01 14:17:00 localhost systemd[1]: Starting Journal Service...
Feb 01 14:17:00 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 01 14:17:00 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 01 14:17:00 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 01 14:17:00 localhost kernel: fuse: init (API version 7.37)
Feb 01 14:17:00 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 01 14:17:00 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 01 14:17:00 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 01 14:17:00 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 01 14:17:00 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 01 14:17:00 localhost systemd[1]: Mounted Huge Pages File System.
Feb 01 14:17:00 localhost systemd-journald[678]: Journal started
Feb 01 14:17:00 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 01 14:17:00 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 01 14:17:00 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 01 14:17:00 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Started Journal Service.
Feb 01 14:17:00 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 01 14:17:00 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 01 14:17:00 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 01 14:17:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 01 14:17:00 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 01 14:17:00 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 01 14:17:00 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 01 14:17:00 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 01 14:17:00 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 01 14:17:00 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 01 14:17:00 localhost systemd[1]: Mounting FUSE Control File System...
Feb 01 14:17:00 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 01 14:17:00 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 01 14:17:00 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 01 14:17:00 localhost kernel: ACPI: bus type drm_connector registered
Feb 01 14:17:00 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 01 14:17:00 localhost systemd[1]: Starting Load/Save OS Random Seed...
Feb 01 14:17:00 localhost systemd[1]: Starting Create System Users...
Feb 01 14:17:00 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 01 14:17:00 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 01 14:17:00 localhost systemd[1]: Mounted FUSE Control File System.
Feb 01 14:17:00 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb 01 14:17:00 localhost systemd-journald[678]: Received client request to flush runtime journal.
Feb 01 14:17:00 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 01 14:17:00 localhost systemd[1]: Finished Load/Save OS Random Seed.
Feb 01 14:17:00 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 01 14:17:00 localhost systemd[1]: Finished Create System Users.
Feb 01 14:17:00 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 01 14:17:00 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 01 14:17:00 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 01 14:17:00 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 01 14:17:00 localhost systemd[1]: Reached target Local File Systems.
Feb 01 14:17:00 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 01 14:17:00 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 01 14:17:00 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 01 14:17:00 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 01 14:17:00 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 01 14:17:00 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 01 14:17:00 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 01 14:17:00 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Feb 01 14:17:00 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 01 14:17:00 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 01 14:17:00 localhost systemd[1]: Starting Security Auditing Service...
Feb 01 14:17:00 localhost systemd[1]: Starting RPC Bind...
Feb 01 14:17:00 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 01 14:17:00 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 01 14:17:00 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 01 14:17:00 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 01 14:17:00 localhost systemd[1]: Started RPC Bind.
Feb 01 14:17:00 localhost augenrules[706]: /sbin/augenrules: No change
Feb 01 14:17:00 localhost augenrules[721]: No rules
Feb 01 14:17:00 localhost augenrules[721]: enabled 1
Feb 01 14:17:00 localhost augenrules[721]: failure 1
Feb 01 14:17:00 localhost augenrules[721]: pid 701
Feb 01 14:17:00 localhost augenrules[721]: rate_limit 0
Feb 01 14:17:00 localhost augenrules[721]: backlog_limit 8192
Feb 01 14:17:00 localhost augenrules[721]: lost 0
Feb 01 14:17:00 localhost augenrules[721]: backlog 3
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time 60000
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time_actual 0
Feb 01 14:17:00 localhost augenrules[721]: enabled 1
Feb 01 14:17:00 localhost augenrules[721]: failure 1
Feb 01 14:17:00 localhost augenrules[721]: pid 701
Feb 01 14:17:00 localhost augenrules[721]: rate_limit 0
Feb 01 14:17:00 localhost augenrules[721]: backlog_limit 8192
Feb 01 14:17:00 localhost augenrules[721]: lost 0
Feb 01 14:17:00 localhost augenrules[721]: backlog 4
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time 60000
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time_actual 0
Feb 01 14:17:00 localhost augenrules[721]: enabled 1
Feb 01 14:17:00 localhost augenrules[721]: failure 1
Feb 01 14:17:00 localhost augenrules[721]: pid 701
Feb 01 14:17:00 localhost augenrules[721]: rate_limit 0
Feb 01 14:17:00 localhost augenrules[721]: backlog_limit 8192
Feb 01 14:17:00 localhost augenrules[721]: lost 0
Feb 01 14:17:00 localhost augenrules[721]: backlog 4
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time 60000
Feb 01 14:17:00 localhost augenrules[721]: backlog_wait_time_actual 0
Feb 01 14:17:00 localhost systemd[1]: Started Security Auditing Service.
Feb 01 14:17:00 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 01 14:17:00 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 01 14:17:00 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 01 14:17:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 01 14:17:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 01 14:17:01 localhost systemd[1]: Starting Update is Completed...
Feb 01 14:17:01 localhost systemd[1]: Finished Update is Completed.
Feb 01 14:17:01 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Feb 01 14:17:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 01 14:17:01 localhost systemd[1]: Reached target System Initialization.
Feb 01 14:17:01 localhost systemd[1]: Started dnf makecache --timer.
Feb 01 14:17:01 localhost systemd[1]: Started Daily rotation of log files.
Feb 01 14:17:01 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 01 14:17:01 localhost systemd[1]: Reached target Timer Units.
Feb 01 14:17:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 01 14:17:01 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 01 14:17:01 localhost systemd[1]: Reached target Socket Units.
Feb 01 14:17:01 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 01 14:17:01 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 01 14:17:01 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 01 14:17:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 01 14:17:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 01 14:17:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 01 14:17:01 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:17:01 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 01 14:17:01 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 01 14:17:01 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 01 14:17:01 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 01 14:17:01 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 01 14:17:01 localhost systemd[1]: Reached target Basic System.
Feb 01 14:17:01 localhost dbus-broker-lau[765]: Ready
Feb 01 14:17:01 localhost systemd[1]: Starting NTP client/server...
Feb 01 14:17:01 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 01 14:17:01 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 01 14:17:01 localhost systemd[1]: Starting IPv4 firewall with iptables...
Feb 01 14:17:01 localhost systemd[1]: Started irqbalance daemon.
Feb 01 14:17:01 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 01 14:17:01 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 14:17:01 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 14:17:01 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 14:17:01 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 01 14:17:01 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 01 14:17:01 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 01 14:17:01 localhost systemd[1]: Starting User Login Management...
Feb 01 14:17:01 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 01 14:17:01 localhost systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 01 14:17:01 localhost systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 01 14:17:01 localhost systemd-logind[786]: New seat seat0.
Feb 01 14:17:01 localhost systemd[1]: Started User Login Management.
Feb 01 14:17:01 localhost chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 01 14:17:01 localhost chronyd[800]: Loaded 0 symmetric keys
Feb 01 14:17:01 localhost chronyd[800]: Using right/UTC timezone to obtain leap second data
Feb 01 14:17:01 localhost chronyd[800]: Loaded seccomp filter (level 2)
Feb 01 14:17:01 localhost systemd[1]: Started NTP client/server.
Feb 01 14:17:01 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 01 14:17:01 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 01 14:17:01 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 01 14:17:01 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 01 14:17:01 localhost kernel: Console: switching to colour dummy device 80x25
Feb 01 14:17:01 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 01 14:17:01 localhost kernel: [drm] features: -context_init
Feb 01 14:17:01 localhost kernel: [drm] number of scanouts: 1
Feb 01 14:17:01 localhost kernel: [drm] number of cap sets: 0
Feb 01 14:17:01 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 01 14:17:01 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 01 14:17:01 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 01 14:17:01 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 01 14:17:01 localhost kernel: kvm_amd: TSC scaling supported
Feb 01 14:17:01 localhost kernel: kvm_amd: Nested Virtualization enabled
Feb 01 14:17:01 localhost kernel: kvm_amd: Nested Paging enabled
Feb 01 14:17:01 localhost kernel: kvm_amd: LBR virtualization supported
Feb 01 14:17:01 localhost iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Feb 01 14:17:01 localhost systemd[1]: Finished IPv4 firewall with iptables.
Feb 01 14:17:01 localhost cloud-init[837]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sun, 01 Feb 2026 14:17:01 +0000. Up 5.08 seconds.
Feb 01 14:17:01 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 01 14:17:01 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 01 14:17:01 localhost systemd[1]: run-cloud\x2dinit-tmp-tmph5o6a1oq.mount: Deactivated successfully.
Feb 01 14:17:01 localhost systemd[1]: Starting Hostname Service...
Feb 01 14:17:02 localhost systemd[1]: Started Hostname Service.
Feb 01 14:17:02 np0005604375.novalocal systemd-hostnamed[851]: Hostname set to <np0005604375.novalocal> (static)
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Reached target Preparation for Network.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Starting Network Manager...
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.1953] NetworkManager (version 1.54.3-2.el9) is starting... (boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.1957] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2120] manager[0x563dd8797000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2163] hostname: hostname: using hostnamed
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2164] hostname: static hostname changed from (none) to "np0005604375.novalocal"
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2167] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2261] manager[0x563dd8797000]: rfkill: Wi-Fi hardware radio set enabled
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2262] manager[0x563dd8797000]: rfkill: WWAN hardware radio set enabled
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2335] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2336] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2336] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2337] manager: Networking is enabled by state file
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2338] settings: Loaded settings plugin: keyfile (internal)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2360] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2381] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2394] dhcp: init: Using DHCP client 'internal'
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2398] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2407] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2417] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2425] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2431] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2434] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2453] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2456] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2458] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2459] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2461] device (eth0): carrier: link connected
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2463] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2468] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2473] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2476] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2476] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2478] manager: NetworkManager state is now CONNECTING
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2479] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2485] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Started Network Manager.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Reached target Network.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2649] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2652] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.2660] device (lo): Activation: successful, device activated.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Reached target NFS client services.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Reached target Remote File Systems.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6107] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6115] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6129] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6146] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6147] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6150] manager: NetworkManager state is now CONNECTED_SITE
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6152] device (eth0): Activation: successful, device activated.
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6157] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 01 14:17:02 np0005604375.novalocal NetworkManager[855]: <info>  [1769955422.6160] manager: startup complete
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 01 14:17:02 np0005604375.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Sun, 01 Feb 2026 14:17:02 +0000. Up 6.23 seconds.
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.238        | 255.255.255.0 | global | fa:16:3e:72:09:b3 |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe72:9b3/64 |       .       |  link  | fa:16:3e:72:09:b3 |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 01 14:17:02 np0005604375.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: new group: name=cloud-user, GID=1001
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: add 'cloud-user' to group 'adm'
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: add 'cloud-user' to group 'systemd-journal'
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: add 'cloud-user' to shadow group 'adm'
Feb 01 14:17:03 np0005604375.novalocal useradd[984]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Generating public/private rsa key pair.
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key fingerprint is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: SHA256:A+msEEuiyQBEA3Ixk/vsCiONr46beOVvjEwYv2+OMWQ root@np0005604375.novalocal
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key's randomart image is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +---[RSA 3072]----+
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |B+=o             |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |o.oo   .         |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |o o.  o          |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |++oo o .         |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |o.o*E o S        |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: | o.+*.   .       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |= .*++           |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |+=. *++          |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |B=o..*+          |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +----[SHA256]-----+
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key fingerprint is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: SHA256:jnt6q868MesRE3MhlqRPNNvZO6KBM5BPOpIOa3ZjOWQ root@np0005604375.novalocal
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key's randomart image is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +---[ECDSA 256]---+
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |     .*..        |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |   . +.= +       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |  o o = + .      |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: | . = + +   .     |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |+ oE= = S o      |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |ooo..o B . .     |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |.+ *  * .        |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |o o oo *o        |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |     o@*..       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +----[SHA256]-----+
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key fingerprint is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: SHA256:nt6iE9ODYt14sYfLWBAJM6sCuxdr++azWEuqgO2OGow root@np0005604375.novalocal
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: The key's randomart image is:
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +--[ED25519 256]--+
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |    +. .         |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |     +o          |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |.   .  .         |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |.. .  . .        |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |o o  . *S+       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |++ oo *.O..      |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |E.=.o. Bo+       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |.* *o.o.+.       |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: |=o*+=o.o...      |
Feb 01 14:17:03 np0005604375.novalocal cloud-init[918]: +----[SHA256]-----+
Feb 01 14:17:04 np0005604375.novalocal sm-notify[1000]: Version 2.5.4 starting
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Reached target Network is Online.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting System Logging Service...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Permit User Sessions...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Finished Permit User Sessions.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started Command Scheduler.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started Getty on tty1.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 01 14:17:04 np0005604375.novalocal crond[1005]: (CRON) STARTUP (1.5.7)
Feb 01 14:17:04 np0005604375.novalocal crond[1005]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 01 14:17:04 np0005604375.novalocal crond[1005]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 85% if used.)
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Reached target Login Prompts.
Feb 01 14:17:04 np0005604375.novalocal crond[1005]: (CRON) INFO (running with inotify support)
Feb 01 14:17:04 np0005604375.novalocal sshd[1002]: Server listening on 0.0.0.0 port 22.
Feb 01 14:17:04 np0005604375.novalocal sshd[1002]: Server listening on :: port 22.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 01 14:17:04 np0005604375.novalocal rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Feb 01 14:17:04 np0005604375.novalocal rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Started System Logging Service.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Reached target Multi-User System.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 01 14:17:04 np0005604375.novalocal rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 14:17:04 np0005604375.novalocal kdumpctl[1010]: kdump: No kdump initial ramdisk found.
Feb 01 14:17:04 np0005604375.novalocal kdumpctl[1010]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1151]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sun, 01 Feb 2026 14:17:04 +0000. Up 7.69 seconds.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Feb 01 14:17:04 np0005604375.novalocal dracut[1261]: dracut-057-102.git20250818.el9
Feb 01 14:17:04 np0005604375.novalocal dracut[1263]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1311]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sun, 01 Feb 2026 14:17:04 +0000. Up 8.02 seconds.
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1333]: #############################################################
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1334]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1336]: 256 SHA256:jnt6q868MesRE3MhlqRPNNvZO6KBM5BPOpIOa3ZjOWQ root@np0005604375.novalocal (ECDSA)
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1338]: 256 SHA256:nt6iE9ODYt14sYfLWBAJM6sCuxdr++azWEuqgO2OGow root@np0005604375.novalocal (ED25519)
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1340]: 3072 SHA256:A+msEEuiyQBEA3Ixk/vsCiONr46beOVvjEwYv2+OMWQ root@np0005604375.novalocal (RSA)
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1341]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1342]: #############################################################
Feb 01 14:17:04 np0005604375.novalocal cloud-init[1311]: Cloud-init v. 24.4-8.el9 finished at Sun, 01 Feb 2026 14:17:04 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 8.19 seconds
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Feb 01 14:17:04 np0005604375.novalocal systemd[1]: Reached target Cloud-init target.
Feb 01 14:17:04 np0005604375.novalocal sshd-session[1349]: Connection reset by 38.102.83.114 port 48656 [preauth]
Feb 01 14:17:04 np0005604375.novalocal sshd-session[1359]: Unable to negotiate with 38.102.83.114 port 48664: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 01 14:17:04 np0005604375.novalocal sshd-session[1368]: Connection reset by 38.102.83.114 port 48668 [preauth]
Feb 01 14:17:04 np0005604375.novalocal sshd-session[1379]: Unable to negotiate with 38.102.83.114 port 48674: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 01 14:17:04 np0005604375.novalocal sshd-session[1384]: Unable to negotiate with 38.102.83.114 port 48684: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 01 14:17:05 np0005604375.novalocal sshd-session[1400]: Unable to negotiate with 38.102.83.114 port 48724: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Feb 01 14:17:05 np0005604375.novalocal sshd-session[1406]: Unable to negotiate with 38.102.83.114 port 48738: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 01 14:17:05 np0005604375.novalocal sshd-session[1389]: Connection closed by 38.102.83.114 port 48696 [preauth]
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal sshd-session[1394]: Connection closed by 38.102.83.114 port 48708 [preauth]
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: Module 'resume' will not be installed, because it's in the list to be omitted!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: memstrack is not available
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: memstrack is not available
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 01 14:17:05 np0005604375.novalocal dracut[1263]: *** Including module: systemd ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: fips ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: systemd-initrd ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: i18n ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: drm ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: prefixdevname ***
Feb 01 14:17:06 np0005604375.novalocal dracut[1263]: *** Including module: kernel-modules ***
Feb 01 14:17:06 np0005604375.novalocal kernel: block vda: the capability attribute has been deprecated.
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: kernel-modules-extra ***
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: qemu ***
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: fstab-sys ***
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: rootfs-block ***
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: terminfo ***
Feb 01 14:17:07 np0005604375.novalocal dracut[1263]: *** Including module: udev-rules ***
Feb 01 14:17:07 np0005604375.novalocal chronyd[800]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Feb 01 14:17:07 np0005604375.novalocal chronyd[800]: System clock wrong by 1.223169 seconds
Feb 01 14:17:08 np0005604375.novalocal chronyd[800]: System clock was stepped by 1.223169 seconds
Feb 01 14:17:08 np0005604375.novalocal chronyd[800]: System clock TAI offset set to 37 seconds
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: Skipping udev rule: 91-permissions.rules
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: virtiofs ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: dracut-systemd ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: usrmount ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: base ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: fs-lib ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: kdumpbase ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:   microcode_ctl module: mangling fw_dir
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: openssl ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: shutdown ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including module: squash ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Including modules done ***
Feb 01 14:17:09 np0005604375.novalocal dracut[1263]: *** Installing kernel module dependencies ***
Feb 01 14:17:10 np0005604375.novalocal dracut[1263]: *** Installing kernel module dependencies done ***
Feb 01 14:17:10 np0005604375.novalocal dracut[1263]: *** Resolving executable dependencies ***
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: *** Resolving executable dependencies done ***
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: *** Generating early-microcode cpio image ***
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: *** Store current command line parameters ***
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: Stored kernel commandline:
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: No dracut internal kernel commandline stored in the initramfs
Feb 01 14:17:11 np0005604375.novalocal dracut[1263]: *** Install squash loader ***
Feb 01 14:17:12 np0005604375.novalocal dracut[1263]: *** Squashing the files inside the initramfs ***
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 25 affinity is now unmanaged
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 31 affinity is now unmanaged
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 28 affinity is now unmanaged
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 32 affinity is now unmanaged
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 30 affinity is now unmanaged
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 01 14:17:12 np0005604375.novalocal irqbalance[781]: IRQ 29 affinity is now unmanaged
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: *** Squashing the files inside the initramfs done ***
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: *** Hardlinking files ***
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Mode:           real
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Files:          50
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Linked:         0 files
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Compared:       0 xattrs
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Compared:       0 files
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Saved:          0 B
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: Duration:       0.000330 seconds
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: *** Hardlinking files done ***
Feb 01 14:17:13 np0005604375.novalocal dracut[1263]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb 01 14:17:13 np0005604375.novalocal kdumpctl[1010]: kdump: kexec: loaded kdump kernel
Feb 01 14:17:13 np0005604375.novalocal kdumpctl[1010]: kdump: Starting kdump: [OK]
Feb 01 14:17:13 np0005604375.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 01 14:17:13 np0005604375.novalocal systemd[1]: Startup finished in 1.206s (kernel) + 2.041s (initrd) + 12.548s (userspace) = 15.797s.
Feb 01 14:17:13 np0005604375.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:17:19 np0005604375.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 37164 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 01 14:17:19 np0005604375.novalocal systemd-logind[786]: New session 1 of user zuul.
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Queued start job for default target Main User Target.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Created slice User Application Slice.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Reached target Paths.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Reached target Timers.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Reached target Sockets.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Reached target Basic System.
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Reached target Main User Target.
Feb 01 14:17:19 np0005604375.novalocal systemd[4302]: Startup finished in 123ms.
Feb 01 14:17:19 np0005604375.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 01 14:17:19 np0005604375.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:17:20 np0005604375.novalocal python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:17:23 np0005604375.novalocal python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:17:28 np0005604375.novalocal python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:17:29 np0005604375.novalocal python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 01 14:17:31 np0005604375.novalocal python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDOg3D/C5sT0sUANmCP2WkPymn7Ec8kER6Qfmso1GaCssVviPENWHfurW4D/9FZnZxpW6/BcjPRXXGGkqaEWbPYfCwONRlQsSb5sPPGoHZ4koyH23+e2Za22LNnaoq3YtLLTgB7UpJSnChaaRjquVHY5RvjfoxypufjOgc7RGV37rrZwTyu1e1Xjb8BKMzDgUy1GBMRMdGjz43DCGk20+T90IVXCtMaSkJuNAjiERMJBH0jhBo7wmJfpcL5ox8OQwV1yMsGjCVKxlTDeuVV18TEjxT/r6sKv1WbDNByANT6DZAAXl/d3JWo/+WLpl77QewiHt7s106MkLLeAWW8DnODSe5HkBfj5uqA8OowP81OV9abJBFhbtfrkjBvuxfkpNVezDbFW0NkJD1qemdFriQJwP9u4pQycLlhkIFjdc2uwFWWoxHsQmshHn9SXhJ8B5hEGRC+C+BQLXtEBNeMFJOIIzbv/Np1NMkVed/R/CUyryMVRpQqcIJdzuTOumrr6U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:32 np0005604375.novalocal python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:32 np0005604375.novalocal python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:33 np0005604375.novalocal python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955452.5373676-207-276159614298051/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=275cd86534264dd4b986e9685221be1c_id_rsa follow=False checksum=93121fb72603a63f689221ec5db13b84048b12b5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:33 np0005604375.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 01 14:17:33 np0005604375.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:34 np0005604375.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955453.49118-240-130840043284461/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=275cd86534264dd4b986e9685221be1c_id_rsa.pub follow=False checksum=9da6b0c6916c9c03e8a5858dd9e4da44fef378ad backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:35 np0005604375.novalocal python3[4974]: ansible-ping Invoked with data=pong
Feb 01 14:17:36 np0005604375.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:17:38 np0005604375.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 01 14:17:39 np0005604375.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:39 np0005604375.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:39 np0005604375.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:39 np0005604375.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:40 np0005604375.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:40 np0005604375.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:41 np0005604375.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydtapqedxoriliwnvzjdlsdowxgyxcko ; /usr/bin/python3'
Feb 01 14:17:41 np0005604375.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:41 np0005604375.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:41 np0005604375.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:42 np0005604375.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbigpmtlznypdtakcjegxidcwvkztka ; /usr/bin/python3'
Feb 01 14:17:42 np0005604375.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:42 np0005604375.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:42 np0005604375.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:42 np0005604375.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowhppohzgmxeqpdjgmwfirqvbcennuv ; /usr/bin/python3'
Feb 01 14:17:42 np0005604375.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:42 np0005604375.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955461.9600563-21-50831400371360/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:42 np0005604375.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:43 np0005604375.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:43 np0005604375.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:43 np0005604375.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:44 np0005604375.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:44 np0005604375.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:44 np0005604375.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:44 np0005604375.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:45 np0005604375.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:45 np0005604375.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:47 np0005604375.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:47 np0005604375.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:47 np0005604375.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:48 np0005604375.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:48 np0005604375.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:48 np0005604375.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:48 np0005604375.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:49 np0005604375.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:49 np0005604375.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:49 np0005604375.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:49 np0005604375.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:50 np0005604375.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:50 np0005604375.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:50 np0005604375.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:50 np0005604375.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:51 np0005604375.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:51 np0005604375.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:17:52 np0005604375.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrskccolvovuqgyhlbklzzunphhckbd ; /usr/bin/python3'
Feb 01 14:17:52 np0005604375.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:53 np0005604375.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 01 14:17:53 np0005604375.novalocal systemd[1]: Starting Time & Date Service...
Feb 01 14:17:53 np0005604375.novalocal systemd[1]: Started Time & Date Service.
Feb 01 14:17:53 np0005604375.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Feb 01 14:17:53 np0005604375.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:54 np0005604375.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjwymisakhzdlhijhozimdorfhyryswc ; /usr/bin/python3'
Feb 01 14:17:54 np0005604375.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:54 np0005604375.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:54 np0005604375.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:55 np0005604375.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:55 np0005604375.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769955474.8666103-153-65205997949975/source _original_basename=tmpctrve2he follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:56 np0005604375.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:56 np0005604375.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769955475.792549-183-151614907174427/source _original_basename=tmp3f3ih5xm follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:57 np0005604375.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdzfxgyvzjsadfoujactdqcxemsftdyr ; /usr/bin/python3'
Feb 01 14:17:57 np0005604375.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:57 np0005604375.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:57 np0005604375.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:57 np0005604375.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sccklqpkikrumivhxitzwvhpnwwnvolr ; /usr/bin/python3'
Feb 01 14:17:57 np0005604375.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:57 np0005604375.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769955476.9317107-231-197640845934383/source _original_basename=tmpp8f8afke follow=False checksum=315d925a1c7d27b381f3cae1546bdf6d57bfb104 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:57 np0005604375.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:58 np0005604375.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:17:58 np0005604375.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:17:58 np0005604375.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrxegfwtdrpmfikrxisaebkoktubutl ; /usr/bin/python3'
Feb 01 14:17:58 np0005604375.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:58 np0005604375.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:17:58 np0005604375.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:58 np0005604375.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpsvytjtdjlkvbpbsvylvfsgsfebrdti ; /usr/bin/python3'
Feb 01 14:17:58 np0005604375.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:59 np0005604375.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955478.5363784-273-202277195219368/source _original_basename=tmp7jmm5fn9 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:17:59 np0005604375.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Feb 01 14:17:59 np0005604375.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykayhjtqijrpntpsshwdiorpipcwbey ; /usr/bin/python3'
Feb 01 14:17:59 np0005604375.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:17:59 np0005604375.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2942-7cf9-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:17:59 np0005604375.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
Feb 01 14:18:00 np0005604375.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-2942-7cf9-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 01 14:18:01 np0005604375.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:18:18 np0005604375.novalocal sudo[6941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgjkbqadqkgjugfhiciogyaxuymsooe ; /usr/bin/python3'
Feb 01 14:18:18 np0005604375.novalocal sudo[6941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:18:19 np0005604375.novalocal python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:18:19 np0005604375.novalocal sudo[6941]: pam_unix(sudo:session): session closed for user root
Feb 01 14:18:23 np0005604375.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 01 14:18:52 np0005604375.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 01 14:18:52 np0005604375.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2310] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 01 14:18:52 np0005604375.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2474] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2499] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2501] device (eth1): carrier: link connected
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2502] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2506] policy: auto-activating connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2509] device (eth1): Activation: starting connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2510] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2511] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2514] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:18:52 np0005604375.novalocal NetworkManager[855]: <info>  [1769955532.2516] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:18:53 np0005604375.novalocal python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-6553-8f61-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:19:03 np0005604375.novalocal sudo[7051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhjyctqiwepzeuiciwuxnpcclqfztcxy ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 01 14:19:03 np0005604375.novalocal sudo[7051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:19:03 np0005604375.novalocal python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:19:03 np0005604375.novalocal sudo[7051]: pam_unix(sudo:session): session closed for user root
Feb 01 14:19:03 np0005604375.novalocal sudo[7124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guyhweavrbwtvwfsiloayldtvyfzvpkk ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 01 14:19:03 np0005604375.novalocal sudo[7124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:19:03 np0005604375.novalocal python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769955543.0591838-102-54971321909273/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=3be24a5af914606cc74cafdf80f44ef63ee45ba0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:19:03 np0005604375.novalocal sudo[7124]: pam_unix(sudo:session): session closed for user root
Feb 01 14:19:04 np0005604375.novalocal sudo[7174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chgdxohladabknllsnhqkldliiivbmgv ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 01 14:19:04 np0005604375.novalocal sudo[7174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:19:04 np0005604375.novalocal python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Stopping Network Manager...
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5064] caught SIGTERM, shutting down normally.
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): canceled DHCP transaction
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5071] dhcp4 (eth0): state changed no lease
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5073] manager: NetworkManager state is now CONNECTING
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5193] dhcp4 (eth1): canceled DHCP transaction
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5193] dhcp4 (eth1): state changed no lease
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[855]: <info>  [1769955544.5246] exiting (success)
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Stopped Network Manager.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Starting Network Manager...
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.5573] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.5576] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.5611] manager[0x56372075c000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Starting Hostname Service...
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Started Hostname Service.
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6340] hostname: hostname: using hostnamed
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6340] hostname: static hostname changed from (none) to "np0005604375.novalocal"
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6348] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6353] manager[0x56372075c000]: rfkill: Wi-Fi hardware radio set enabled
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6353] manager[0x56372075c000]: rfkill: WWAN hardware radio set enabled
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6396] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6397] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6398] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6399] manager: Networking is enabled by state file
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6403] settings: Loaded settings plugin: keyfile (internal)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6413] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6458] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6474] dhcp: init: Using DHCP client 'internal'
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6479] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6486] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6493] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6509] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6519] device (eth0): carrier: link connected
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6534] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6535] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6544] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6554] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6562] device (eth1): carrier: link connected
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6568] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6577] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5) (indicated)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6577] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6585] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6596] device (eth1): Activation: starting connection 'Wired connection 1' (91277a2e-344e-3388-a112-2b38838ac4e5)
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Started Network Manager.
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6604] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6611] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6627] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6630] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6633] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6636] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6639] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6641] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6645] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6654] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6657] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6671] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6674] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6688] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6695] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6700] device (lo): Activation: successful, device activated.
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6708] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6714] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 01 14:19:04 np0005604375.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6792] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6815] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6817] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6822] manager: NetworkManager state is now CONNECTED_SITE
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6828] device (eth0): Activation: successful, device activated.
Feb 01 14:19:04 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955544.6837] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 01 14:19:04 np0005604375.novalocal sudo[7174]: pam_unix(sudo:session): session closed for user root
Feb 01 14:19:05 np0005604375.novalocal python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-6553-8f61-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:19:14 np0005604375.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:19:34 np0005604375.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.8673] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 01 14:19:49 np0005604375.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:19:49 np0005604375.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9018] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9021] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9027] device (eth1): Activation: successful, device activated.
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9035] manager: startup complete
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9037] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <warn>  [1769955589.9042] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9050] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9213] dhcp4 (eth1): canceled DHCP transaction
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9215] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9215] dhcp4 (eth1): state changed no lease
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9225] policy: auto-activating connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9229] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9230] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9232] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9236] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9242] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9271] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9273] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:19:49 np0005604375.novalocal NetworkManager[7185]: <info>  [1769955589.9278] device (eth1): Activation: successful, device activated.
Feb 01 14:19:51 np0005604375.novalocal systemd[4302]: Starting Mark boot as successful...
Feb 01 14:19:51 np0005604375.novalocal systemd[4302]: Finished Mark boot as successful.
Feb 01 14:19:59 np0005604375.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:20:03 np0005604375.novalocal sudo[7364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyncqlcoeozazsrzqegnbjzdzptqfuq ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 01 14:20:03 np0005604375.novalocal sudo[7364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:20:03 np0005604375.novalocal python3[7366]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:20:03 np0005604375.novalocal sudo[7364]: pam_unix(sudo:session): session closed for user root
Feb 01 14:20:03 np0005604375.novalocal sudo[7437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziopdnezkdvhwcovqlarpxqxhlumyric ; OS_CLOUD=vexxhost /usr/bin/python3'
Feb 01 14:20:03 np0005604375.novalocal sudo[7437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:20:03 np0005604375.novalocal python3[7439]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769955603.0981352-267-107694781614546/source _original_basename=tmpbgzi0j16 follow=False checksum=7eace079e547e1278ba77819803b9809997a2a46 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:20:03 np0005604375.novalocal sudo[7437]: pam_unix(sudo:session): session closed for user root
Feb 01 14:21:03 np0005604375.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 37164:11: disconnected by user
Feb 01 14:21:03 np0005604375.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 37164
Feb 01 14:21:03 np0005604375.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:21:03 np0005604375.novalocal systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Feb 01 14:22:51 np0005604375.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Feb 01 14:22:51 np0005604375.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Feb 01 14:22:51 np0005604375.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Feb 01 14:26:44 np0005604375.novalocal sshd-session[7469]: Accepted publickey for zuul from 38.102.83.114 port 52128 ssh2: RSA SHA256:ukhXxVC8oCSeSO9VQn4ZNf7JkO/cu/icAewGEjIjPv8
Feb 01 14:26:44 np0005604375.novalocal systemd-logind[786]: New session 3 of user zuul.
Feb 01 14:26:44 np0005604375.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 01 14:26:44 np0005604375.novalocal sshd-session[7469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:26:44 np0005604375.novalocal sudo[7496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqykorlkagbuunfzmbfiiipssxishez ; /usr/bin/python3'
Feb 01 14:26:44 np0005604375.novalocal sudo[7496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:44 np0005604375.novalocal python3[7498]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-c26f-db18-000000002167-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:44 np0005604375.novalocal sudo[7496]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:45 np0005604375.novalocal sudo[7524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzjevsjquvkkxsbcsojqmwtvumgoqjo ; /usr/bin/python3'
Feb 01 14:26:45 np0005604375.novalocal sudo[7524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:45 np0005604375.novalocal python3[7526]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:45 np0005604375.novalocal sudo[7524]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:45 np0005604375.novalocal sudo[7550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kagtjxuuiljabqqxyxvrlygpqxpshpji ; /usr/bin/python3'
Feb 01 14:26:45 np0005604375.novalocal sudo[7550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:45 np0005604375.novalocal python3[7552]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:45 np0005604375.novalocal sudo[7550]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:45 np0005604375.novalocal sudo[7577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvfqwunrkkuscjcohqjackurbsscdsl ; /usr/bin/python3'
Feb 01 14:26:45 np0005604375.novalocal sudo[7577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:45 np0005604375.novalocal python3[7579]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:45 np0005604375.novalocal sudo[7577]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:45 np0005604375.novalocal sudo[7603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvinyhgkezqpmisphrifadlovgehamu ; /usr/bin/python3'
Feb 01 14:26:45 np0005604375.novalocal sudo[7603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:45 np0005604375.novalocal python3[7605]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:45 np0005604375.novalocal sudo[7603]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:46 np0005604375.novalocal sudo[7629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsjxdrxlqwegiqnqmaowinaqrrgotrg ; /usr/bin/python3'
Feb 01 14:26:46 np0005604375.novalocal sudo[7629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:46 np0005604375.novalocal python3[7631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:46 np0005604375.novalocal sudo[7629]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:46 np0005604375.novalocal sudo[7707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reibxrpffwbmwapdynyrjkpzkibdrjnq ; /usr/bin/python3'
Feb 01 14:26:46 np0005604375.novalocal sudo[7707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:47 np0005604375.novalocal python3[7709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:26:47 np0005604375.novalocal sudo[7707]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:47 np0005604375.novalocal sudo[7780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkjvgogmhtojaftyavuvlhuocbqdjvul ; /usr/bin/python3'
Feb 01 14:26:47 np0005604375.novalocal sudo[7780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:47 np0005604375.novalocal python3[7782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956006.811156-494-278074563692936/source _original_basename=tmpdhrnq3pq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:26:47 np0005604375.novalocal sudo[7780]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:48 np0005604375.novalocal sudo[7830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdcmrbsoydfronxdjlbatrnzrhqggxwh ; /usr/bin/python3'
Feb 01 14:26:48 np0005604375.novalocal sudo[7830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:48 np0005604375.novalocal python3[7832]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 14:26:48 np0005604375.novalocal systemd[1]: Reloading.
Feb 01 14:26:48 np0005604375.novalocal systemd-rc-local-generator[7852]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:26:48 np0005604375.novalocal sudo[7830]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:49 np0005604375.novalocal sudo[7886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlvxeryvwqumgrqddjyufefdqypkjes ; /usr/bin/python3'
Feb 01 14:26:49 np0005604375.novalocal sudo[7886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:50 np0005604375.novalocal python3[7888]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 01 14:26:50 np0005604375.novalocal sudo[7886]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:50 np0005604375.novalocal sudo[7912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqpmtlriqjlpniofsazhctfdlndrvsy ; /usr/bin/python3'
Feb 01 14:26:50 np0005604375.novalocal sudo[7912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:50 np0005604375.novalocal python3[7914]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:50 np0005604375.novalocal sudo[7912]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:50 np0005604375.novalocal sudo[7940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgoxrnuuoxvfnvicchoomyjjcrqdmwrm ; /usr/bin/python3'
Feb 01 14:26:50 np0005604375.novalocal sudo[7940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:50 np0005604375.novalocal python3[7942]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:50 np0005604375.novalocal sudo[7940]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:50 np0005604375.novalocal sudo[7968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmeizqhgvasvfyzwzocgjnufvmyleul ; /usr/bin/python3'
Feb 01 14:26:50 np0005604375.novalocal sudo[7968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:50 np0005604375.novalocal python3[7970]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:50 np0005604375.novalocal sudo[7968]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:50 np0005604375.novalocal sudo[7996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gshuuquywuertzbtarmlvekbujlcxvky ; /usr/bin/python3'
Feb 01 14:26:50 np0005604375.novalocal sudo[7996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:51 np0005604375.novalocal python3[7998]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:51 np0005604375.novalocal sudo[7996]: pam_unix(sudo:session): session closed for user root
Feb 01 14:26:51 np0005604375.novalocal python3[8025]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-c26f-db18-00000000216e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:26:52 np0005604375.novalocal python3[8055]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:26:54 np0005604375.novalocal sshd-session[7472]: Connection closed by 38.102.83.114 port 52128
Feb 01 14:26:54 np0005604375.novalocal sshd-session[7469]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:26:54 np0005604375.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 01 14:26:54 np0005604375.novalocal systemd[1]: session-3.scope: Consumed 3.477s CPU time.
Feb 01 14:26:54 np0005604375.novalocal systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Feb 01 14:26:54 np0005604375.novalocal systemd-logind[786]: Removed session 3.
Feb 01 14:26:55 np0005604375.novalocal sshd-session[8059]: Accepted publickey for zuul from 38.102.83.114 port 38084 ssh2: RSA SHA256:ukhXxVC8oCSeSO9VQn4ZNf7JkO/cu/icAewGEjIjPv8
Feb 01 14:26:55 np0005604375.novalocal systemd-logind[786]: New session 4 of user zuul.
Feb 01 14:26:55 np0005604375.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 01 14:26:55 np0005604375.novalocal sshd-session[8059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:26:55 np0005604375.novalocal sudo[8086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcttvbkzreipnrhzyxfyhrrltvlzhaiu ; /usr/bin/python3'
Feb 01 14:26:55 np0005604375.novalocal sudo[8086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:26:55 np0005604375.novalocal python3[8088]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:27:01 np0005604375.novalocal setsebool[8127]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 01 14:27:01 np0005604375.novalocal setsebool[8127]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  Converting 385 SID table entries...
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:27:11 np0005604375.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  Converting 388 SID table entries...
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:27:20 np0005604375.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:27:37 np0005604375.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 01 14:27:37 np0005604375.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:27:37 np0005604375.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:27:37 np0005604375.novalocal systemd[1]: Reloading.
Feb 01 14:27:37 np0005604375.novalocal systemd-rc-local-generator[8895]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:27:37 np0005604375.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:27:38 np0005604375.novalocal sudo[8086]: pam_unix(sudo:session): session closed for user root
Feb 01 14:27:48 np0005604375.novalocal python3[17083]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-8732-6ace-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:27:49 np0005604375.novalocal kernel: evm: overlay not supported
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Feb 01 14:27:49 np0005604375.novalocal dbus-broker-launch[17649]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 01 14:27:49 np0005604375.novalocal dbus-broker-launch[17649]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: Started D-Bus User Message Bus.
Feb 01 14:27:49 np0005604375.novalocal dbus-broker-lau[17649]: Ready
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: Created slice Slice /user.
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: podman-17581.scope: unit configures an IP firewall, but not running as root.
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: Started podman-17581.scope.
Feb 01 14:27:49 np0005604375.novalocal systemd[4302]: Started podman-pause-3e251e21.scope.
Feb 01 14:27:50 np0005604375.novalocal sudo[18054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjeeaiuosgyflkooopyfwzsntbnuhsii ; /usr/bin/python3'
Feb 01 14:27:50 np0005604375.novalocal sudo[18054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:27:50 np0005604375.novalocal python3[18060]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.219:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.219:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:27:50 np0005604375.novalocal python3[18060]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 01 14:27:50 np0005604375.novalocal sudo[18054]: pam_unix(sudo:session): session closed for user root
Feb 01 14:27:50 np0005604375.novalocal sshd-session[8062]: Connection closed by 38.102.83.114 port 38084
Feb 01 14:27:50 np0005604375.novalocal sshd-session[8059]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:27:50 np0005604375.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 01 14:27:50 np0005604375.novalocal systemd[1]: session-4.scope: Consumed 39.262s CPU time.
Feb 01 14:27:50 np0005604375.novalocal systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Feb 01 14:27:50 np0005604375.novalocal systemd-logind[786]: Removed session 4.
Feb 01 14:28:08 np0005604375.novalocal sshd-session[28080]: Unable to negotiate with 38.102.83.245 port 36444: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 01 14:28:08 np0005604375.novalocal sshd-session[28081]: Unable to negotiate with 38.102.83.245 port 36438: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 01 14:28:08 np0005604375.novalocal sshd-session[28085]: Connection closed by 38.102.83.245 port 36428 [preauth]
Feb 01 14:28:08 np0005604375.novalocal sshd-session[28083]: Connection closed by 38.102.83.245 port 36434 [preauth]
Feb 01 14:28:08 np0005604375.novalocal sshd-session[28084]: Unable to negotiate with 38.102.83.245 port 36442: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 01 14:28:11 np0005604375.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:28:11 np0005604375.novalocal systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:28:11 np0005604375.novalocal systemd[1]: man-db-cache-update.service: Consumed 38.715s CPU time.
Feb 01 14:28:11 np0005604375.novalocal systemd[1]: run-rfa7bfb4df0e24838b3ba88efed88c531.service: Deactivated successfully.
Feb 01 14:28:12 np0005604375.novalocal sshd-session[29639]: Accepted publickey for zuul from 38.102.83.114 port 40332 ssh2: RSA SHA256:ukhXxVC8oCSeSO9VQn4ZNf7JkO/cu/icAewGEjIjPv8
Feb 01 14:28:12 np0005604375.novalocal systemd-logind[786]: New session 5 of user zuul.
Feb 01 14:28:12 np0005604375.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 01 14:28:12 np0005604375.novalocal sshd-session[29639]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:28:12 np0005604375.novalocal python3[29666]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:28:12 np0005604375.novalocal sudo[29690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zosiwxwdywfvvlqgcayrywgbechiuzyi ; /usr/bin/python3'
Feb 01 14:28:12 np0005604375.novalocal sudo[29690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:12 np0005604375.novalocal python3[29692]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:28:12 np0005604375.novalocal sudo[29690]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:12 np0005604375.novalocal irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Feb 01 14:28:12 np0005604375.novalocal irqbalance[781]: IRQ 27 affinity is now unmanaged
Feb 01 14:28:13 np0005604375.novalocal sudo[29716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjawttuxyfqfudylolswdzcrktvnxen ; /usr/bin/python3'
Feb 01 14:28:13 np0005604375.novalocal sudo[29716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:13 np0005604375.novalocal python3[29718]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604375.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 01 14:28:13 np0005604375.novalocal useradd[29720]: new group: name=cloud-admin, GID=1002
Feb 01 14:28:13 np0005604375.novalocal useradd[29720]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Feb 01 14:28:13 np0005604375.novalocal sudo[29716]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:13 np0005604375.novalocal sudo[29750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeekeesdibgjvesddjokrxfjujbmiurn ; /usr/bin/python3'
Feb 01 14:28:13 np0005604375.novalocal sudo[29750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:13 np0005604375.novalocal python3[29752]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJTEnvHfA3HfXJBZL6COftw7wlOkNG3L9xY8it+Bi82MvcOrDXYPdlkNOv7Dds48b4NNwxcMKPs0qLhYP0ww/mQ= zuul@np0005604374.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 01 14:28:13 np0005604375.novalocal sudo[29750]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:14 np0005604375.novalocal sudo[29828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrjeslodmmnuindzkalntylwkopfiuv ; /usr/bin/python3'
Feb 01 14:28:14 np0005604375.novalocal sudo[29828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:14 np0005604375.novalocal python3[29830]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:28:14 np0005604375.novalocal sudo[29828]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:14 np0005604375.novalocal sudo[29901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efjjdvrqogwdefaqwewnhjmdclwmdbfq ; /usr/bin/python3'
Feb 01 14:28:14 np0005604375.novalocal sudo[29901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:14 np0005604375.novalocal python3[29903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956093.996258-135-218452704060140/source _original_basename=tmpjfnzqhjg follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:28:14 np0005604375.novalocal sudo[29901]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:15 np0005604375.novalocal sudo[29951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmuilqszknydxnatezrstdjzvlkihjqy ; /usr/bin/python3'
Feb 01 14:28:15 np0005604375.novalocal sudo[29951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:28:15 np0005604375.novalocal python3[29953]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 01 14:28:15 np0005604375.novalocal systemd[1]: Starting Hostname Service...
Feb 01 14:28:15 np0005604375.novalocal systemd[1]: Started Hostname Service.
Feb 01 14:28:15 np0005604375.novalocal systemd-hostnamed[29957]: Changed pretty hostname to 'compute-0'
Feb 01 14:28:15 compute-0 systemd-hostnamed[29957]: Hostname set to <compute-0> (static)
Feb 01 14:28:15 compute-0 NetworkManager[7185]: <info>  [1769956095.5706] hostname: static hostname changed from "np0005604375.novalocal" to "compute-0"
Feb 01 14:28:15 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:28:15 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:28:15 compute-0 sudo[29951]: pam_unix(sudo:session): session closed for user root
Feb 01 14:28:15 compute-0 sshd-session[29642]: Connection closed by 38.102.83.114 port 40332
Feb 01 14:28:15 compute-0 sshd-session[29639]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:28:15 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Feb 01 14:28:15 compute-0 systemd[1]: session-5.scope: Consumed 2.018s CPU time.
Feb 01 14:28:15 compute-0 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Feb 01 14:28:15 compute-0 systemd-logind[786]: Removed session 5.
Feb 01 14:28:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:28:45 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 01 14:29:17 compute-0 sshd-session[29974]: Connection closed by 80.94.92.171 port 58086
Feb 01 14:30:31 compute-0 sshd-session[29978]: error: kex_exchange_identification: read: Connection reset by peer
Feb 01 14:30:31 compute-0 sshd-session[29978]: Connection reset by 176.120.22.52 port 38133
Feb 01 14:31:37 compute-0 sshd-session[29979]: Accepted publickey for zuul from 38.102.83.245 port 49432 ssh2: RSA SHA256:ukhXxVC8oCSeSO9VQn4ZNf7JkO/cu/icAewGEjIjPv8
Feb 01 14:31:37 compute-0 systemd-logind[786]: New session 6 of user zuul.
Feb 01 14:31:37 compute-0 systemd[1]: Started Session 6 of User zuul.
Feb 01 14:31:37 compute-0 sshd-session[29979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:31:38 compute-0 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:31:39 compute-0 sudo[30169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmlkuppfqmdjzvjbnupmbjezsjnvfwfu ; /usr/bin/python3'
Feb 01 14:31:39 compute-0 sudo[30169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:39 compute-0 python3[30171]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:39 compute-0 sudo[30169]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:39 compute-0 sudo[30242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfilcsetyozdgiqtbamxwgatmqmdphmr ; /usr/bin/python3'
Feb 01 14:31:39 compute-0 sudo[30242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:39 compute-0 python3[30244]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:39 compute-0 sudo[30242]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:39 compute-0 sudo[30268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbnkaghvwpbqlfwpmilzoguenrjyris ; /usr/bin/python3'
Feb 01 14:31:39 compute-0 sudo[30268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:40 compute-0 python3[30270]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:40 compute-0 sudo[30268]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:40 compute-0 sudo[30341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkrmqwghqhakntxjnzfzgathlwghdpje ; /usr/bin/python3'
Feb 01 14:31:40 compute-0 sudo[30341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:40 compute-0 python3[30343]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:40 compute-0 sudo[30341]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:40 compute-0 sudo[30367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psthjoejdvrsewnrxvffdizqavmxzxav ; /usr/bin/python3'
Feb 01 14:31:40 compute-0 sudo[30367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:40 compute-0 python3[30369]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:40 compute-0 sudo[30367]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:40 compute-0 sudo[30440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutiobrluisgvztflokhutmrvmqzokgi ; /usr/bin/python3'
Feb 01 14:31:40 compute-0 sudo[30440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:40 compute-0 python3[30442]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:40 compute-0 sudo[30440]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:40 compute-0 sudo[30466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyogekwygwyplbanghbvrqyeblkbbgfo ; /usr/bin/python3'
Feb 01 14:31:40 compute-0 sudo[30466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:41 compute-0 python3[30468]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:41 compute-0 sudo[30466]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:41 compute-0 sudo[30539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjldryalfbjephakyvxbsrpdxvrynnn ; /usr/bin/python3'
Feb 01 14:31:41 compute-0 sudo[30539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:41 compute-0 python3[30541]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:41 compute-0 sudo[30539]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:41 compute-0 sudo[30565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtuanhmprumvbvqargkftpiumqpibfsu ; /usr/bin/python3'
Feb 01 14:31:41 compute-0 sudo[30565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:41 compute-0 python3[30567]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:41 compute-0 sudo[30565]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:41 compute-0 sudo[30638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofzyxgqzkpinaiauamxerhtudchdqrb ; /usr/bin/python3'
Feb 01 14:31:41 compute-0 sudo[30638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:41 compute-0 python3[30640]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:41 compute-0 sudo[30638]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:41 compute-0 sudo[30664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-basivzarudkrgvtmfwjsbvnggjgcupeq ; /usr/bin/python3'
Feb 01 14:31:41 compute-0 sudo[30664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:41 compute-0 python3[30666]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:41 compute-0 sudo[30664]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:42 compute-0 sudo[30737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxjpnofulfywfbvcldcnlaevqyhxtpk ; /usr/bin/python3'
Feb 01 14:31:42 compute-0 sudo[30737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:42 compute-0 python3[30739]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:42 compute-0 sudo[30737]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:42 compute-0 sudo[30763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gklzwlkavepvkeylocaamaydjfszvrxm ; /usr/bin/python3'
Feb 01 14:31:42 compute-0 sudo[30763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:42 compute-0 python3[30765]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:31:42 compute-0 sudo[30763]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:42 compute-0 sudo[30836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prahkctjjjspbgjueaklmtngqhefkvcj ; /usr/bin/python3'
Feb 01 14:31:42 compute-0 sudo[30836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:31:42 compute-0 python3[30838]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769956299.1663122-33607-139827054329208/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:31:42 compute-0 sudo[30836]: pam_unix(sudo:session): session closed for user root
Feb 01 14:31:44 compute-0 sshd-session[30863]: Unable to negotiate with 192.168.122.11 port 33446: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 01 14:31:44 compute-0 sshd-session[30864]: Connection closed by 192.168.122.11 port 33426 [preauth]
Feb 01 14:31:44 compute-0 sshd-session[30865]: Connection closed by 192.168.122.11 port 33428 [preauth]
Feb 01 14:31:44 compute-0 sshd-session[30866]: Unable to negotiate with 192.168.122.11 port 33438: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 01 14:31:44 compute-0 sshd-session[30867]: Unable to negotiate with 192.168.122.11 port 33454: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 01 14:31:53 compute-0 python3[30896]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:32:41 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 01 14:32:41 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 01 14:32:41 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 01 14:32:41 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 01 14:34:06 compute-0 sshd-session[30905]: Invalid user sol from 80.94.92.171 port 33238
Feb 01 14:34:06 compute-0 sshd-session[30905]: Connection closed by invalid user sol 80.94.92.171 port 33238 [preauth]
Feb 01 14:36:52 compute-0 sshd-session[29982]: Received disconnect from 38.102.83.245 port 49432:11: disconnected by user
Feb 01 14:36:52 compute-0 sshd-session[29982]: Disconnected from user zuul 38.102.83.245 port 49432
Feb 01 14:36:52 compute-0 sshd-session[29979]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:36:52 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Feb 01 14:36:52 compute-0 systemd[1]: session-6.scope: Consumed 3.952s CPU time.
Feb 01 14:36:52 compute-0 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Feb 01 14:36:52 compute-0 systemd-logind[786]: Removed session 6.
Feb 01 14:37:42 compute-0 sshd-session[30907]: Invalid user ubuntu from 80.94.92.171 port 36260
Feb 01 14:37:42 compute-0 sshd-session[30907]: Connection closed by invalid user ubuntu 80.94.92.171 port 36260 [preauth]
Feb 01 14:41:20 compute-0 sshd-session[30910]: Invalid user sol from 80.94.92.171 port 39300
Feb 01 14:41:21 compute-0 sshd-session[30910]: Connection closed by invalid user sol 80.94.92.171 port 39300 [preauth]
Feb 01 14:42:39 compute-0 sshd-session[30912]: Accepted publickey for zuul from 192.168.122.30 port 53650 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:42:39 compute-0 systemd-logind[786]: New session 7 of user zuul.
Feb 01 14:42:39 compute-0 systemd[1]: Started Session 7 of User zuul.
Feb 01 14:42:39 compute-0 sshd-session[30912]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:42:40 compute-0 python3.9[31065]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:42:41 compute-0 sudo[31244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbroiujdpklerignhbzcdzwvvloqbzuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956961.229107-27-200505970681777/AnsiballZ_command.py'
Feb 01 14:42:41 compute-0 sudo[31244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:42:41 compute-0 python3.9[31246]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:42:48 compute-0 sudo[31244]: pam_unix(sudo:session): session closed for user root
Feb 01 14:42:48 compute-0 sshd-session[30915]: Connection closed by 192.168.122.30 port 53650
Feb 01 14:42:48 compute-0 sshd-session[30912]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:42:48 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Feb 01 14:42:48 compute-0 systemd[1]: session-7.scope: Consumed 7.158s CPU time.
Feb 01 14:42:48 compute-0 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Feb 01 14:42:48 compute-0 systemd-logind[786]: Removed session 7.
Feb 01 14:43:04 compute-0 sshd-session[31303]: Accepted publickey for zuul from 192.168.122.30 port 45652 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:43:04 compute-0 systemd-logind[786]: New session 8 of user zuul.
Feb 01 14:43:04 compute-0 systemd[1]: Started Session 8 of User zuul.
Feb 01 14:43:04 compute-0 sshd-session[31303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:43:05 compute-0 python3.9[31456]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 01 14:43:06 compute-0 python3.9[31630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:43:06 compute-0 sudo[31780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpofzovvngklgvpoulqrrzgpzifwtsfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956986.4337697-40-47357269436993/AnsiballZ_command.py'
Feb 01 14:43:06 compute-0 sudo[31780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:07 compute-0 python3.9[31782]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:43:07 compute-0 sudo[31780]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:07 compute-0 sudo[31933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvfufjzopjvdtrmiafdhraehievpfboy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956987.2654386-52-72586146212967/AnsiballZ_stat.py'
Feb 01 14:43:07 compute-0 sudo[31933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:07 compute-0 python3.9[31935]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:43:07 compute-0 sudo[31933]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:08 compute-0 sudo[32085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvnvenhsafrbgwkftthrfpznhtrrjgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956987.9475694-60-5899317420645/AnsiballZ_file.py'
Feb 01 14:43:08 compute-0 sudo[32085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:08 compute-0 python3.9[32087]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:43:08 compute-0 sudo[32085]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:08 compute-0 sudo[32237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raokirmcznzosyxjvxzorcrnfwgnqzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956988.7167428-68-207176494516641/AnsiballZ_stat.py'
Feb 01 14:43:08 compute-0 sudo[32237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:09 compute-0 python3.9[32239]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:43:09 compute-0 sudo[32237]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:09 compute-0 sudo[32360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbiojgeklscljktvsqudpkanxjjnvxxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956988.7167428-68-207176494516641/AnsiballZ_copy.py'
Feb 01 14:43:09 compute-0 sudo[32360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:09 compute-0 python3.9[32362]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769956988.7167428-68-207176494516641/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:43:09 compute-0 sudo[32360]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:10 compute-0 sudo[32512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjgymssnacnnvitojlelqpbukgkehhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956989.962752-83-275255972496661/AnsiballZ_setup.py'
Feb 01 14:43:10 compute-0 sudo[32512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:10 compute-0 python3.9[32514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:43:10 compute-0 sudo[32512]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:11 compute-0 sudo[32668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvgxxzqsutlgrllsnraihibvonipwyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956990.7857785-91-56718050480066/AnsiballZ_file.py'
Feb 01 14:43:11 compute-0 sudo[32668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:11 compute-0 python3.9[32670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:43:11 compute-0 sudo[32668]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:11 compute-0 sudo[32820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swwyzkhjoawmhuihsygmeyocdjajojzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956991.3704128-100-147706376944923/AnsiballZ_file.py'
Feb 01 14:43:11 compute-0 sudo[32820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:11 compute-0 python3.9[32822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:43:11 compute-0 sudo[32820]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:12 compute-0 python3.9[32972]: ansible-ansible.builtin.service_facts Invoked
Feb 01 14:43:15 compute-0 python3.9[33225]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:43:15 compute-0 python3.9[33375]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:43:17 compute-0 python3.9[33529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:43:17 compute-0 sudo[33685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psxspkfjuoywnpksdimseqjyzvuvkuab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956997.4751408-148-72431926766061/AnsiballZ_setup.py'
Feb 01 14:43:17 compute-0 sudo[33685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:18 compute-0 python3.9[33687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:43:18 compute-0 sudo[33685]: pam_unix(sudo:session): session closed for user root
Feb 01 14:43:18 compute-0 sudo[33769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiyiwgrqkoomqaajefcesdpatpicwgnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769956997.4751408-148-72431926766061/AnsiballZ_dnf.py'
Feb 01 14:43:18 compute-0 sudo[33769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:43:18 compute-0 python3.9[33771]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:43:59 compute-0 systemd[1]: Reloading.
Feb 01 14:43:59 compute-0 systemd-rc-local-generator[33961]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:43:59 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 01 14:43:59 compute-0 systemd[1]: Reloading.
Feb 01 14:43:59 compute-0 systemd-rc-local-generator[34011]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:43:59 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 01 14:43:59 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 01 14:43:59 compute-0 systemd[1]: Reloading.
Feb 01 14:43:59 compute-0 systemd-rc-local-generator[34048]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:44:00 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 01 14:44:00 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 14:44:00 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 14:44:00 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 14:44:45 compute-0 sshd-session[34255]: Invalid user sol from 80.94.92.171 port 42344
Feb 01 14:44:45 compute-0 sshd-session[34255]: Connection closed by invalid user sol 80.94.92.171 port 42344 [preauth]
Feb 01 14:44:52 compute-0 kernel: SELinux:  Converting 2726 SID table entries...
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:44:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:44:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 01 14:44:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:44:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:44:53 compute-0 systemd[1]: Reloading.
Feb 01 14:44:53 compute-0 systemd-rc-local-generator[34372]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:44:53 compute-0 systemd[1]: Starting dnf makecache...
Feb 01 14:44:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:44:53 compute-0 dnf[34408]: Failed determining last makecache time.
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-barbican-42b4c41831408a8e323  92 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-python-glean-642fffe0203a8ffcc2443db52 150 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-cinder-1c00d6490d88e436f26ef 145 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-python-stevedore-c4acc5639fd2329372142 144 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 sudo[33769]: pam_unix(sudo:session): session closed for user root
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-python-cloudkitty-tests-tempest-783703 142 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-diskimage-builder-61b717cc45660834fe9a 165 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-nova-eaa65f0b85123a4ee343246 157 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-python-designate-tests-tempest-347fdbc 149 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-glance-1fd12c29b339f30fe823e 122 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 120 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-manila-d783d10e75495b73866db 119 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-neutron-95cadbd379667c8520c8 128 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-octavia-5975097dd4b021385178 120 kB/s | 3.0 kB     00:00
Feb 01 14:44:53 compute-0 dnf[34408]: delorean-openstack-watcher-c014f81a8647287f6dcc 114 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: delorean-python-tcib-78032d201b02cee27e8e644c61 132 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 124 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: delorean-openstack-swift-dc98a8463506ac520c469a 133 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: delorean-python-tempestconf-8515371b7cceebd4282 154 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: delorean-openstack-heat-ui-013accbfd179753bc3f0 145 kB/s | 3.0 kB     00:00
Feb 01 14:44:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:44:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:44:54 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.039s CPU time.
Feb 01 14:44:54 compute-0 systemd[1]: run-rc940f10d34684257864d073cb96d4272.service: Deactivated successfully.
Feb 01 14:44:54 compute-0 sudo[35306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnvaefbbdgffeecfgttwnymrdhicwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957093.9665015-160-220259592868371/AnsiballZ_command.py'
Feb 01 14:44:54 compute-0 sudo[35306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:44:54 compute-0 dnf[34408]: CentOS Stream 9 - BaseOS                         48 kB/s | 6.7 kB     00:00
Feb 01 14:44:54 compute-0 python3.9[35308]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:44:54 compute-0 dnf[34408]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Feb 01 14:44:54 compute-0 dnf[34408]: CentOS Stream 9 - CRB                            69 kB/s | 6.6 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: dlrn-antelope-testing                            88 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: dlrn-antelope-build-deps                         91 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 sudo[35306]: pam_unix(sudo:session): session closed for user root
Feb 01 14:44:55 compute-0 dnf[34408]: centos9-storage                                  34 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: centos9-opstools                                 48 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: NFV SIG OpenvSwitch                              33 kB/s | 3.0 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: repo-setup-centos-appstream                     118 kB/s | 4.4 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: repo-setup-centos-baseos                        174 kB/s | 3.9 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: repo-setup-centos-highavailability              138 kB/s | 3.9 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: repo-setup-centos-powertools                    205 kB/s | 4.3 kB     00:00
Feb 01 14:44:55 compute-0 dnf[34408]: Extra Packages for Enterprise Linux 9 - x86_64  166 kB/s |  30 kB     00:00
Feb 01 14:44:56 compute-0 sudo[35609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuvjdjfjzhjhmvjsiowtwuxfpzcruhcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957095.41227-168-48267643821500/AnsiballZ_selinux.py'
Feb 01 14:44:56 compute-0 sudo[35609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:44:56 compute-0 python3.9[35611]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 01 14:44:56 compute-0 sudo[35609]: pam_unix(sudo:session): session closed for user root
Feb 01 14:44:56 compute-0 dnf[34408]: Metadata cache created.
Feb 01 14:44:56 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 01 14:44:56 compute-0 systemd[1]: Finished dnf makecache.
Feb 01 14:44:56 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.844s CPU time.
Feb 01 14:44:57 compute-0 sudo[35762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyszksxdoeubefavnjqunrjotxzuwpth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957096.7698627-179-199853651663006/AnsiballZ_command.py'
Feb 01 14:44:57 compute-0 sudo[35762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:44:57 compute-0 python3.9[35764]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 01 14:44:57 compute-0 sudo[35762]: pam_unix(sudo:session): session closed for user root
Feb 01 14:44:58 compute-0 sudo[35915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owblcltkflqfvfqnprpssaypbnbuasth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957097.942006-187-197756269682876/AnsiballZ_file.py'
Feb 01 14:44:58 compute-0 sudo[35915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:44:59 compute-0 python3.9[35917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:44:59 compute-0 sudo[35915]: pam_unix(sudo:session): session closed for user root
Feb 01 14:44:59 compute-0 sudo[36067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohygemcvijtkzxmaxqhpdnztpisiode ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957099.3815913-195-253264679291444/AnsiballZ_mount.py'
Feb 01 14:44:59 compute-0 sudo[36067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:00 compute-0 python3.9[36069]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 01 14:45:00 compute-0 sudo[36067]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:01 compute-0 sudo[36219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpkqlojarnyayauasgvujaticnpforjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957100.7878416-223-256967428704510/AnsiballZ_file.py'
Feb 01 14:45:01 compute-0 sudo[36219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:01 compute-0 python3.9[36221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:45:01 compute-0 sudo[36219]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:01 compute-0 sudo[36371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atgyxbrwrlhxivhmpebuqnckxkmdmgcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957101.3966815-231-273310472300441/AnsiballZ_stat.py'
Feb 01 14:45:01 compute-0 sudo[36371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:01 compute-0 python3.9[36373]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:45:01 compute-0 sudo[36371]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:02 compute-0 sudo[36494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddoumhwhuahrktthyyzugwialkxhqmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957101.3966815-231-273310472300441/AnsiballZ_copy.py'
Feb 01 14:45:02 compute-0 sudo[36494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:02 compute-0 python3.9[36496]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957101.3966815-231-273310472300441/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:45:02 compute-0 sudo[36494]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:02 compute-0 sudo[36646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmgkbmohryqwnbjkgkojyqodlubloknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957102.635915-255-241951379522996/AnsiballZ_stat.py'
Feb 01 14:45:02 compute-0 sudo[36646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:04 compute-0 python3.9[36648]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:45:04 compute-0 sudo[36646]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:04 compute-0 sudo[36798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydkxedvrkqfbukyftvirumszcjoyvyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957104.4933367-263-194518971114907/AnsiballZ_command.py'
Feb 01 14:45:04 compute-0 sudo[36798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:05 compute-0 python3.9[36800]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:05 compute-0 sudo[36798]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:06 compute-0 sudo[36951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plosobuwhkaxwvqbzvpttdqojfhaykxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957105.9524589-271-165224862899367/AnsiballZ_file.py'
Feb 01 14:45:06 compute-0 sudo[36951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:06 compute-0 python3.9[36953]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:45:06 compute-0 sudo[36951]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:07 compute-0 sudo[37103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjnxaxduwueywmtqmrtwbnjamsinfarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957106.705638-282-157278264860798/AnsiballZ_getent.py'
Feb 01 14:45:07 compute-0 sudo[37103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:07 compute-0 python3.9[37105]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 01 14:45:07 compute-0 sudo[37103]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:07 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 14:45:07 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 14:45:07 compute-0 sudo[37257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfbkyuhrqyvwsfcmfvnigtvmkkdrtnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957107.4092019-290-47940146176176/AnsiballZ_group.py'
Feb 01 14:45:07 compute-0 sudo[37257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:08 compute-0 python3.9[37259]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 14:45:08 compute-0 groupadd[37260]: group added to /etc/group: name=qemu, GID=107
Feb 01 14:45:08 compute-0 groupadd[37260]: group added to /etc/gshadow: name=qemu
Feb 01 14:45:08 compute-0 groupadd[37260]: new group: name=qemu, GID=107
Feb 01 14:45:08 compute-0 sudo[37257]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:08 compute-0 sudo[37415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbgygtqlllzwmfszqlvrjlmdbobaxeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957108.2254024-298-129371096394407/AnsiballZ_user.py'
Feb 01 14:45:08 compute-0 sudo[37415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:08 compute-0 python3.9[37417]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 01 14:45:08 compute-0 useradd[37419]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Feb 01 14:45:08 compute-0 sudo[37415]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:09 compute-0 sudo[37575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzrwgutdylgnguftcmpibplcipdhqaip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957109.0924995-306-76608205056177/AnsiballZ_getent.py'
Feb 01 14:45:09 compute-0 sudo[37575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:09 compute-0 python3.9[37577]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 01 14:45:09 compute-0 sudo[37575]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:09 compute-0 sudo[37728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmmkskjtdvkyjejksuqefukegffzjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957109.6932461-314-239182627231414/AnsiballZ_group.py'
Feb 01 14:45:09 compute-0 sudo[37728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:10 compute-0 python3.9[37730]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 14:45:10 compute-0 groupadd[37731]: group added to /etc/group: name=hugetlbfs, GID=42477
Feb 01 14:45:10 compute-0 groupadd[37731]: group added to /etc/gshadow: name=hugetlbfs
Feb 01 14:45:10 compute-0 groupadd[37731]: new group: name=hugetlbfs, GID=42477
Feb 01 14:45:10 compute-0 sudo[37728]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:10 compute-0 sudo[37886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcdjqsagpawlvrgkmwqplqrllemnmtgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957110.3501363-323-185828980313049/AnsiballZ_file.py'
Feb 01 14:45:10 compute-0 sudo[37886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:10 compute-0 python3.9[37888]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 01 14:45:10 compute-0 sudo[37886]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:11 compute-0 sudo[38038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozsjfgrzhcwycnlcvpbaesiffovvqivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957111.1032476-334-65374483124713/AnsiballZ_dnf.py'
Feb 01 14:45:11 compute-0 sudo[38038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:11 compute-0 python3.9[38040]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:45:13 compute-0 sudo[38038]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:13 compute-0 sudo[38191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiebzqnjnkidlxnxyftdztrwwyhlntou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957113.130494-342-246755853694825/AnsiballZ_file.py'
Feb 01 14:45:13 compute-0 sudo[38191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:13 compute-0 python3.9[38193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:45:13 compute-0 sudo[38191]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:13 compute-0 sudo[38343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkbqomjuwbamadjfwgnfcmkjkswmaoze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957113.634022-350-38713113703165/AnsiballZ_stat.py'
Feb 01 14:45:13 compute-0 sudo[38343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:14 compute-0 python3.9[38345]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:45:14 compute-0 sudo[38343]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:14 compute-0 sudo[38466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwrijyxholghqldxvobgvzqemcjbrbbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957113.634022-350-38713113703165/AnsiballZ_copy.py'
Feb 01 14:45:14 compute-0 sudo[38466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:14 compute-0 python3.9[38468]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957113.634022-350-38713113703165/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:45:14 compute-0 sudo[38466]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:15 compute-0 sudo[38618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidwtvuhrxetlweoyzeussttvotnxnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957114.8350215-365-262891043153357/AnsiballZ_systemd.py'
Feb 01 14:45:15 compute-0 sudo[38618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:15 compute-0 python3.9[38620]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:45:15 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 01 14:45:15 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 01 14:45:15 compute-0 kernel: Bridge firewalling registered
Feb 01 14:45:15 compute-0 systemd-modules-load[38624]: Inserted module 'br_netfilter'
Feb 01 14:45:15 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 01 14:45:15 compute-0 sudo[38618]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:16 compute-0 sudo[38777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffgrfospupltowiljpuxrmqnmgwwrxwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957116.0176466-373-259226007622768/AnsiballZ_stat.py'
Feb 01 14:45:16 compute-0 sudo[38777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:16 compute-0 python3.9[38779]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:45:16 compute-0 sudo[38777]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:16 compute-0 sudo[38900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnsoitwykwavqloptodrbsnhgkfhuobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957116.0176466-373-259226007622768/AnsiballZ_copy.py'
Feb 01 14:45:16 compute-0 sudo[38900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:17 compute-0 python3.9[38902]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957116.0176466-373-259226007622768/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:45:17 compute-0 sudo[38900]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:17 compute-0 sudo[39052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhniosnpuafdujzamwtskckeufmmstai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957117.3699884-391-158676666635795/AnsiballZ_dnf.py'
Feb 01 14:45:17 compute-0 sudo[39052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:17 compute-0 python3.9[39054]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:45:20 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 14:45:20 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 14:45:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:45:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:45:21 compute-0 systemd[1]: Reloading.
Feb 01 14:45:21 compute-0 systemd-rc-local-generator[39113]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:45:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:45:21 compute-0 sudo[39052]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:22 compute-0 python3.9[41011]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:45:23 compute-0 python3.9[42319]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 01 14:45:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:45:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:45:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.758s CPU time.
Feb 01 14:45:23 compute-0 systemd[1]: run-r6e341c3ff4d941d5b210cb8999349135.service: Deactivated successfully.
Feb 01 14:45:23 compute-0 python3.9[43105]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:45:24 compute-0 sudo[43256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwonubknszrotwofblxbkcrvtclriwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957123.8305206-430-172113134166123/AnsiballZ_command.py'
Feb 01 14:45:24 compute-0 sudo[43256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:24 compute-0 python3.9[43258]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:24 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 01 14:45:24 compute-0 systemd[1]: Starting Authorization Manager...
Feb 01 14:45:24 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 01 14:45:24 compute-0 polkitd[43475]: Started polkitd version 0.117
Feb 01 14:45:24 compute-0 polkitd[43475]: Loading rules from directory /etc/polkit-1/rules.d
Feb 01 14:45:24 compute-0 polkitd[43475]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 01 14:45:24 compute-0 polkitd[43475]: Finished loading, compiling and executing 2 rules
Feb 01 14:45:24 compute-0 systemd[1]: Started Authorization Manager.
Feb 01 14:45:24 compute-0 polkitd[43475]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 01 14:45:24 compute-0 sudo[43256]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:25 compute-0 sudo[43643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqooqcafndtyztgqvndwbnxroxaxehoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957124.955181-439-258181842602493/AnsiballZ_systemd.py'
Feb 01 14:45:25 compute-0 sudo[43643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:25 compute-0 python3.9[43645]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:45:25 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 01 14:45:25 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 01 14:45:25 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 01 14:45:25 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 01 14:45:25 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 01 14:45:25 compute-0 sudo[43643]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:26 compute-0 python3.9[43807]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 01 14:45:28 compute-0 sudo[43957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumuqlwuwycdrtwigapnqcflcchbulth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957127.9885297-496-18806885134546/AnsiballZ_systemd.py'
Feb 01 14:45:28 compute-0 sudo[43957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:28 compute-0 python3.9[43959]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:45:28 compute-0 systemd[1]: Reloading.
Feb 01 14:45:28 compute-0 systemd-rc-local-generator[43986]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:45:28 compute-0 sudo[43957]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:29 compute-0 sudo[44146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgwopyuspjxsjxfmyauatdpgmapqqcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957128.955818-496-84869456336629/AnsiballZ_systemd.py'
Feb 01 14:45:29 compute-0 sudo[44146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:29 compute-0 python3.9[44148]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:45:29 compute-0 systemd[1]: Reloading.
Feb 01 14:45:29 compute-0 systemd-rc-local-generator[44176]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:45:29 compute-0 sudo[44146]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:30 compute-0 sudo[44336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aydueohrsutbrcumhlosoxwcrwczimpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957130.0666726-512-116184141143732/AnsiballZ_command.py'
Feb 01 14:45:30 compute-0 sudo[44336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:30 compute-0 python3.9[44338]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:30 compute-0 sudo[44336]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:30 compute-0 sudo[44489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfsmlhsuyuirotzodppevynjaffxrrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957130.6111536-520-269174959444940/AnsiballZ_command.py'
Feb 01 14:45:30 compute-0 sudo[44489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:31 compute-0 python3.9[44491]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:31 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 01 14:45:31 compute-0 sudo[44489]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:31 compute-0 sudo[44642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhirzwnnadjdojaprrwdcsppgtztwqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957131.19653-528-43570571013687/AnsiballZ_command.py'
Feb 01 14:45:31 compute-0 sudo[44642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:31 compute-0 python3.9[44644]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:33 compute-0 sudo[44642]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:33 compute-0 sudo[44804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxuvhpxyfvbzzxsdhhzqvrkzlpwkztxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957133.1877732-536-167795993590862/AnsiballZ_command.py'
Feb 01 14:45:33 compute-0 sudo[44804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:33 compute-0 python3.9[44806]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:45:33 compute-0 sudo[44804]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:34 compute-0 sudo[44957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyolneumzryvadxwpfwzhnfyqtubxdlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957133.7589777-544-278721475539905/AnsiballZ_systemd.py'
Feb 01 14:45:34 compute-0 sudo[44957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:34 compute-0 python3.9[44959]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:45:34 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 01 14:45:34 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Feb 01 14:45:34 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Feb 01 14:45:34 compute-0 systemd[1]: Starting Apply Kernel Variables...
Feb 01 14:45:34 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 01 14:45:34 compute-0 systemd[1]: Finished Apply Kernel Variables.
Feb 01 14:45:34 compute-0 sudo[44957]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:34 compute-0 sshd-session[31306]: Connection closed by 192.168.122.30 port 45652
Feb 01 14:45:34 compute-0 sshd-session[31303]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:45:34 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Feb 01 14:45:34 compute-0 systemd[1]: session-8.scope: Consumed 1min 59.168s CPU time.
Feb 01 14:45:34 compute-0 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Feb 01 14:45:34 compute-0 systemd-logind[786]: Removed session 8.
Feb 01 14:45:40 compute-0 sshd-session[44989]: Accepted publickey for zuul from 192.168.122.30 port 52844 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:45:40 compute-0 systemd-logind[786]: New session 9 of user zuul.
Feb 01 14:45:40 compute-0 systemd[1]: Started Session 9 of User zuul.
Feb 01 14:45:40 compute-0 sshd-session[44989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:45:40 compute-0 python3.9[45142]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:45:41 compute-0 sudo[45296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aispmfvjyfjreconhyzbwhzezstiykys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957141.4279137-31-15509774480925/AnsiballZ_getent.py'
Feb 01 14:45:41 compute-0 sudo[45296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:41 compute-0 python3.9[45298]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 01 14:45:41 compute-0 sudo[45296]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:42 compute-0 sudo[45449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szqqfunhnzfxdqqciczljktffgchkaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957142.2647932-39-38037513407661/AnsiballZ_group.py'
Feb 01 14:45:42 compute-0 sudo[45449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:42 compute-0 python3.9[45451]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 14:45:42 compute-0 groupadd[45452]: group added to /etc/group: name=openvswitch, GID=42476
Feb 01 14:45:42 compute-0 groupadd[45452]: group added to /etc/gshadow: name=openvswitch
Feb 01 14:45:42 compute-0 groupadd[45452]: new group: name=openvswitch, GID=42476
Feb 01 14:45:42 compute-0 sudo[45449]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:43 compute-0 sudo[45607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvgncmmcszywksvitnmacuuzwuodptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957142.9791234-47-132403082004063/AnsiballZ_user.py'
Feb 01 14:45:43 compute-0 sudo[45607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:43 compute-0 python3.9[45609]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 01 14:45:43 compute-0 useradd[45611]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Feb 01 14:45:43 compute-0 useradd[45611]: add 'openvswitch' to group 'hugetlbfs'
Feb 01 14:45:43 compute-0 useradd[45611]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 01 14:45:43 compute-0 sudo[45607]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:44 compute-0 sudo[45767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tblylirskajbjheikuivtnnzgrtdwxpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957143.8568692-57-272773627116290/AnsiballZ_setup.py'
Feb 01 14:45:44 compute-0 sudo[45767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:44 compute-0 python3.9[45769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:45:44 compute-0 sudo[45767]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:44 compute-0 sudo[45851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwuwnshhovzgsrxcmpwqctypuvskmlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957143.8568692-57-272773627116290/AnsiballZ_dnf.py'
Feb 01 14:45:44 compute-0 sudo[45851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:45 compute-0 python3.9[45853]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 01 14:45:47 compute-0 sudo[45851]: pam_unix(sudo:session): session closed for user root
Feb 01 14:45:47 compute-0 sudo[46015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqximdqpzqyrnhiqusrfrogjhyepxxnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957147.4581175-71-238011701513200/AnsiballZ_dnf.py'
Feb 01 14:45:47 compute-0 sudo[46015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:45:48 compute-0 python3.9[46017]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:45:58 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:45:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:45:58 compute-0 groupadd[46040]: group added to /etc/group: name=unbound, GID=994
Feb 01 14:45:58 compute-0 groupadd[46040]: group added to /etc/gshadow: name=unbound
Feb 01 14:45:58 compute-0 groupadd[46040]: new group: name=unbound, GID=994
Feb 01 14:45:58 compute-0 useradd[46047]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Feb 01 14:45:58 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 01 14:45:58 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 01 14:45:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:45:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:45:59 compute-0 systemd[1]: Reloading.
Feb 01 14:45:59 compute-0 systemd-rc-local-generator[46548]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:45:59 compute-0 systemd-sysv-generator[46552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:45:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:46:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:46:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:46:00 compute-0 systemd[1]: run-r67e3b1238b9e4b72bb6453438317428f.service: Deactivated successfully.
Feb 01 14:46:00 compute-0 sudo[46015]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:00 compute-0 sudo[47116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfqojnvvcuhezohneoxozhumzrzrzxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957160.2897267-79-35447063102191/AnsiballZ_systemd.py'
Feb 01 14:46:00 compute-0 sudo[47116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:01 compute-0 python3.9[47118]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 14:46:01 compute-0 systemd[1]: Reloading.
Feb 01 14:46:01 compute-0 systemd-sysv-generator[47152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:46:01 compute-0 systemd-rc-local-generator[47144]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:46:01 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Feb 01 14:46:01 compute-0 chown[47160]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 01 14:46:01 compute-0 ovs-ctl[47165]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 01 14:46:01 compute-0 ovs-ctl[47165]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 01 14:46:01 compute-0 ovs-ctl[47165]: Starting ovsdb-server [  OK  ]
Feb 01 14:46:01 compute-0 ovs-vsctl[47214]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 01 14:46:01 compute-0 ovs-vsctl[47234]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c3bd6005-873a-4620-bb39-624ed33e90e2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 01 14:46:01 compute-0 ovs-ctl[47165]: Configuring Open vSwitch system IDs [  OK  ]
Feb 01 14:46:01 compute-0 ovs-ctl[47165]: Enabling remote OVSDB managers [  OK  ]
Feb 01 14:46:01 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Feb 01 14:46:01 compute-0 ovs-vsctl[47240]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 01 14:46:01 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 01 14:46:02 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 01 14:46:02 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 01 14:46:02 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Feb 01 14:46:02 compute-0 ovs-ctl[47284]: Inserting openvswitch module [  OK  ]
Feb 01 14:46:02 compute-0 ovs-ctl[47253]: Starting ovs-vswitchd [  OK  ]
Feb 01 14:46:02 compute-0 ovs-vsctl[47303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 01 14:46:02 compute-0 ovs-ctl[47253]: Enabling remote OVSDB managers [  OK  ]
Feb 01 14:46:02 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 01 14:46:02 compute-0 systemd[1]: Starting Open vSwitch...
Feb 01 14:46:02 compute-0 systemd[1]: Finished Open vSwitch.
Feb 01 14:46:02 compute-0 sudo[47116]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:03 compute-0 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:46:03 compute-0 sudo[47604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfbdqflhpatxeuomkftbcsspdgnklev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957163.3308907-97-233142130246657/AnsiballZ_sefcontext.py'
Feb 01 14:46:03 compute-0 sudo[47604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:04 compute-0 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 01 14:46:04 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 14:46:04 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 14:46:05 compute-0 sudo[47604]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:05 compute-0 python3.9[47761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:46:06 compute-0 sudo[47917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcgwuwpascrqpuecmjbbjsprangajjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957166.4627244-115-215708371958155/AnsiballZ_dnf.py'
Feb 01 14:46:06 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 01 14:46:06 compute-0 sudo[47917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:06 compute-0 python3.9[47919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:46:08 compute-0 sudo[47917]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:08 compute-0 sudo[48070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpvczhpzftxwarjgcptecdoezcjuywea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957168.1864254-123-198098782801742/AnsiballZ_command.py'
Feb 01 14:46:08 compute-0 sudo[48070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:08 compute-0 python3.9[48072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:46:09 compute-0 sudo[48070]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:10 compute-0 sudo[48357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwdqwiziyzaqlqcdkusuunlgzyqwkxpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957169.6583622-131-65172795648919/AnsiballZ_file.py'
Feb 01 14:46:10 compute-0 sudo[48357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:10 compute-0 python3.9[48359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 01 14:46:10 compute-0 sudo[48357]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:11 compute-0 python3.9[48509]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:46:11 compute-0 sudo[48661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfsminonrfehlwoqhcgpyxxjgpxtzfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957171.2127664-147-246144225639366/AnsiballZ_dnf.py'
Feb 01 14:46:11 compute-0 sudo[48661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:11 compute-0 python3.9[48663]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:46:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:46:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:46:13 compute-0 systemd[1]: Reloading.
Feb 01 14:46:13 compute-0 systemd-rc-local-generator[48694]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:46:13 compute-0 systemd-sysv-generator[48701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:46:13 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:46:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:46:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:46:13 compute-0 systemd[1]: run-r266032b50d7449788b8ae9995d586317.service: Deactivated successfully.
Feb 01 14:46:13 compute-0 sudo[48661]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:14 compute-0 sudo[48978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oastqzurvgigheqyjogogddtwwdfgekk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957174.0382378-155-119825557093643/AnsiballZ_systemd.py'
Feb 01 14:46:14 compute-0 sudo[48978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:14 compute-0 python3.9[48980]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:46:14 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 01 14:46:14 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Feb 01 14:46:14 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Feb 01 14:46:14 compute-0 systemd[1]: Stopping Network Manager...
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5328] caught SIGTERM, shutting down normally.
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5339] dhcp4 (eth0): canceled DHCP transaction
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5340] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5340] dhcp4 (eth0): state changed no lease
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5342] manager: NetworkManager state is now CONNECTED_SITE
Feb 01 14:46:14 compute-0 NetworkManager[7185]: <info>  [1769957174.5395] exiting (success)
Feb 01 14:46:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:46:14 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 01 14:46:14 compute-0 systemd[1]: Stopped Network Manager.
Feb 01 14:46:14 compute-0 systemd[1]: NetworkManager.service: Consumed 11.980s CPU time, 4.1M memory peak, read 0B from disk, written 33.0K to disk.
Feb 01 14:46:14 compute-0 systemd[1]: Starting Network Manager...
Feb 01 14:46:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.5812] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:bc6eed0e-afac-49e7-b313-e00c329dc99a)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.5812] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.5851] manager[0x56043530f000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 01 14:46:14 compute-0 systemd[1]: Starting Hostname Service...
Feb 01 14:46:14 compute-0 systemd[1]: Started Hostname Service.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6800] hostname: hostname: using hostnamed
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6801] hostname: static hostname changed from (none) to "compute-0"
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6805] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6808] manager[0x56043530f000]: rfkill: Wi-Fi hardware radio set enabled
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6808] manager[0x56043530f000]: rfkill: WWAN hardware radio set enabled
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6826] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6833] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6834] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6834] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6834] manager: Networking is enabled by state file
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6836] settings: Loaded settings plugin: keyfile (internal)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6839] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6856] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6861] dhcp: init: Using DHCP client 'internal'
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6863] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6866] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6869] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6875] device (lo): Activation: starting connection 'lo' (993b83ea-ade5-4a5e-93d7-372f4fe03bae)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6880] device (eth0): carrier: link connected
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6883] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6887] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6887] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6892] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6897] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6900] device (eth1): carrier: link connected
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6903] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6907] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0) (indicated)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6907] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6911] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6917] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb 01 14:46:14 compute-0 systemd[1]: Started Network Manager.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6920] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6928] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6930] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6931] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6933] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6935] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6937] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6948] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6951] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6957] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6959] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6964] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6972] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6989] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6990] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.6994] device (lo): Activation: successful, device activated.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7007] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7008] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7011] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7012] device (eth1): Activation: successful, device activated.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7619] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb 01 14:46:14 compute-0 sudo[48978]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7625] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 01 14:46:14 compute-0 systemd[1]: Starting Network Manager Wait Online...
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7674] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7697] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7698] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7700] manager: NetworkManager state is now CONNECTED_SITE
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7702] device (eth0): Activation: successful, device activated.
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7705] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 01 14:46:14 compute-0 NetworkManager[48987]: <info>  [1769957174.7707] manager: startup complete
Feb 01 14:46:14 compute-0 systemd[1]: Finished Network Manager Wait Online.
Feb 01 14:46:15 compute-0 sudo[49204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atpfujpkexlvnumaoukexrcutptigsjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957174.9000506-163-119102504459635/AnsiballZ_dnf.py'
Feb 01 14:46:15 compute-0 sudo[49204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:15 compute-0 python3.9[49206]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:46:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:46:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:46:19 compute-0 systemd[1]: Reloading.
Feb 01 14:46:19 compute-0 systemd-sysv-generator[49262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:46:19 compute-0 systemd-rc-local-generator[49258]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:46:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 14:46:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:46:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:46:19 compute-0 systemd[1]: run-r59cdcf63ff774da69b38f9accf4c3fb6.service: Deactivated successfully.
Feb 01 14:46:19 compute-0 sudo[49204]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:20 compute-0 sudo[49663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcpivbdbbtosztuedojpolqjnrpjrklp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957180.185299-175-154446295199092/AnsiballZ_stat.py'
Feb 01 14:46:20 compute-0 sudo[49663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:20 compute-0 python3.9[49665]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:46:20 compute-0 sudo[49663]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:21 compute-0 sudo[49815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdgongdzcqchyqklxfifcnerahriuuzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957180.7433128-184-71709051150407/AnsiballZ_ini_file.py'
Feb 01 14:46:21 compute-0 sudo[49815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:21 compute-0 python3.9[49817]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:21 compute-0 sudo[49815]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:21 compute-0 sudo[49969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxtmgnzdllgblfwcnemahkmxalgnnkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957181.5712223-194-144957134068044/AnsiballZ_ini_file.py'
Feb 01 14:46:21 compute-0 sudo[49969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:21 compute-0 python3.9[49971]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:21 compute-0 sudo[49969]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:22 compute-0 sudo[50121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdjiojopmejldrpppoxubcwpqaceehd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957182.1223245-194-241462374825329/AnsiballZ_ini_file.py'
Feb 01 14:46:22 compute-0 sudo[50121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:22 compute-0 python3.9[50123]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:22 compute-0 sudo[50121]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:22 compute-0 sudo[50273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbnaiwcmaqjtqsyhaykdnnhailffbqrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957182.6977057-209-6013025410579/AnsiballZ_ini_file.py'
Feb 01 14:46:22 compute-0 sudo[50273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:23 compute-0 python3.9[50275]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:23 compute-0 sudo[50273]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:23 compute-0 sudo[50425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzxfplvjjcjjzsjprusegnssykpggbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957183.2086873-209-250246474501765/AnsiballZ_ini_file.py'
Feb 01 14:46:23 compute-0 sudo[50425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:23 compute-0 python3.9[50427]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:23 compute-0 sudo[50425]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:23 compute-0 sudo[50577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjoxzkmyfrfkgaqyyalbtzkcfakvuwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957183.7506762-224-179053541521192/AnsiballZ_stat.py'
Feb 01 14:46:23 compute-0 sudo[50577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:24 compute-0 python3.9[50579]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:46:24 compute-0 sudo[50577]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:24 compute-0 sudo[50700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpjqvbghyxamlaygsqpoirgsxawzbzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957183.7506762-224-179053541521192/AnsiballZ_copy.py'
Feb 01 14:46:24 compute-0 sudo[50700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:24 compute-0 python3.9[50702]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957183.7506762-224-179053541521192/.source _original_basename=.tbw7hkmw follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:24 compute-0 sudo[50700]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:24 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:46:25 compute-0 sudo[50853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icfdbimncyexuzkosqhwaeihachvpkvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957184.816126-239-270360358868999/AnsiballZ_file.py'
Feb 01 14:46:25 compute-0 sudo[50853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:25 compute-0 python3.9[50855]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:25 compute-0 sudo[50853]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:25 compute-0 sudo[51005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfopuagfermdydffjcrrsjyxywsfprac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957185.3177328-247-263310388317390/AnsiballZ_edpm_os_net_config_mappings.py'
Feb 01 14:46:25 compute-0 sudo[51005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:25 compute-0 python3.9[51007]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 01 14:46:25 compute-0 sudo[51005]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:26 compute-0 sudo[51157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgkxjojdzdfjsjtzmxvbqtauueeqpcor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957186.0074818-256-35453758174975/AnsiballZ_file.py'
Feb 01 14:46:26 compute-0 sudo[51157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:26 compute-0 python3.9[51159]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:26 compute-0 sudo[51157]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:27 compute-0 sudo[51309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwolounilherhjtypqdysxkqzlcuqrzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957186.8179746-266-108672881180832/AnsiballZ_stat.py'
Feb 01 14:46:27 compute-0 sudo[51309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:27 compute-0 sudo[51309]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:27 compute-0 sudo[51432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvudlqvgpeognddkpchrqxcwvcqabfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957186.8179746-266-108672881180832/AnsiballZ_copy.py'
Feb 01 14:46:27 compute-0 sudo[51432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:27 compute-0 sudo[51432]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:28 compute-0 sudo[51584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuteenrxnikziaiatuucbfeclfdrxqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957187.9161482-281-195125635379733/AnsiballZ_slurp.py'
Feb 01 14:46:28 compute-0 sudo[51584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:28 compute-0 python3.9[51586]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 01 14:46:28 compute-0 sudo[51584]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:29 compute-0 sudo[51759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crnzcwkcyngmrsnshmkyjctxmkbjbdzn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957188.6270986-290-176120901093420/async_wrapper.py j837567535167 300 /home/zuul/.ansible/tmp/ansible-tmp-1769957188.6270986-290-176120901093420/AnsiballZ_edpm_os_net_config.py _'
Feb 01 14:46:29 compute-0 sudo[51759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:29 compute-0 ansible-async_wrapper.py[51761]: Invoked with j837567535167 300 /home/zuul/.ansible/tmp/ansible-tmp-1769957188.6270986-290-176120901093420/AnsiballZ_edpm_os_net_config.py _
Feb 01 14:46:29 compute-0 ansible-async_wrapper.py[51764]: Starting module and watcher
Feb 01 14:46:29 compute-0 ansible-async_wrapper.py[51764]: Start watching 51765 (300)
Feb 01 14:46:29 compute-0 ansible-async_wrapper.py[51765]: Start module (51765)
Feb 01 14:46:29 compute-0 ansible-async_wrapper.py[51761]: Return async_wrapper task started.
Feb 01 14:46:29 compute-0 sudo[51759]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:29 compute-0 python3.9[51766]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Feb 01 14:46:30 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 01 14:46:30 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 01 14:46:30 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 01 14:46:30 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 01 14:46:30 compute-0 kernel: cfg80211: failed to load regulatory.db
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.5728] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.5754] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6467] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6471] audit: op="connection-add" uuid="bb3c6b02-6650-44b1-b29e-a73688a7f962" name="br-ex-br" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6494] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6497] audit: op="connection-add" uuid="9ea40ce2-b169-446f-bdb0-6b894c24e30c" name="br-ex-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6516] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6519] audit: op="connection-add" uuid="e2adbb49-e2d3-43b6-86fa-16ac6b1b47ae" name="eth1-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6540] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6542] audit: op="connection-add" uuid="a49e08c5-32d1-4198-85f2-a0171be3d5a1" name="vlan20-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6557] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6559] audit: op="connection-add" uuid="b7755b43-8a91-4d3f-a7ba-7a331cd05355" name="vlan21-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6571] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6574] audit: op="connection-add" uuid="cab2260a-ed04-4d14-8a3b-3b49c1bea63e" name="vlan22-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6586] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6588] audit: op="connection-add" uuid="edb26f41-63cc-4950-9b08-0a4cf7ca45e4" name="vlan23-port" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6609] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6628] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6630] audit: op="connection-add" uuid="cc597100-9c89-42bf-8c8f-2fbabfb34bac" name="br-ex-if" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6676] audit: op="connection-update" uuid="98bb363c-97f6-5419-a1f6-12d0df6ca2e0" name="ci-private-network" args="connection.timestamp,connection.controller,connection.master,connection.port-type,connection.slave-type,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ovs-interface.type,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6705] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6709] audit: op="connection-add" uuid="c829836a-8093-42c6-94fe-e2f2eb906a76" name="vlan20-if" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6739] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6742] audit: op="connection-add" uuid="44823588-c624-432d-897b-bf1351217920" name="vlan21-if" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6771] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6775] audit: op="connection-add" uuid="f6254eb2-3870-478c-8fa6-d72693ac70ed" name="vlan22-if" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6806] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6809] audit: op="connection-add" uuid="981ba108-96f7-41eb-9bfb-f97b212e521e" name="vlan23-if" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6830] audit: op="connection-delete" uuid="91277a2e-344e-3388-a112-2b38838ac4e5" name="Wired connection 1" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6852] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6858] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6871] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6886] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (bb3c6b02-6650-44b1-b29e-a73688a7f962)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6887] audit: op="connection-activate" uuid="bb3c6b02-6650-44b1-b29e-a73688a7f962" name="br-ex-br" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6891] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6892] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6903] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6910] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9ea40ce2-b169-446f-bdb0-6b894c24e30c)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6915] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6916] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6925] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6932] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e2adbb49-e2d3-43b6-86fa-16ac6b1b47ae)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6937] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6938] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6948] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6956] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (a49e08c5-32d1-4198-85f2-a0171be3d5a1)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6960] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6961] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6973] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6980] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b7755b43-8a91-4d3f-a7ba-7a331cd05355)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6984] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.6986] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.6996] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7006] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cab2260a-ed04-4d14-8a3b-3b49c1bea63e)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7010] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7011] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7021] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7029] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (edb26f41-63cc-4950-9b08-0a4cf7ca45e4)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7030] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7036] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7040] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7054] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7055] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7060] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7068] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cc597100-9c89-42bf-8c8f-2fbabfb34bac)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7069] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7076] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7080] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7082] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7084] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7105] device (eth1): disconnecting for new activation request.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7106] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7111] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7115] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7116] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7122] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7124] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7131] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7141] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c829836a-8093-42c6-94fe-e2f2eb906a76)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7142] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7148] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7151] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7154] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7159] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7161] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7168] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7175] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (44823588-c624-432d-897b-bf1351217920)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7176] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7183] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7187] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7190] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7195] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7197] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7204] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7212] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (f6254eb2-3870-478c-8fa6-d72693ac70ed)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7214] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7219] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7222] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7224] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7229] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <warn>  [1769957191.7231] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7237] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7245] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (981ba108-96f7-41eb-9bfb-f97b212e521e)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7246] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7252] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7255] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7258] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb 01 14:46:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7261] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7287] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7292] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7299] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7302] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7316] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7325] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: ovs-system: entered promiscuous mode
Feb 01 14:46:31 compute-0 kernel: Timeout policy base is empty
Feb 01 14:46:31 compute-0 systemd-udevd[51772]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:46:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7389] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7396] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7400] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7409] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7417] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7424] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7427] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7436] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7444] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7451] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7454] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7464] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7471] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7477] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7479] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7489] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7496] dhcp4 (eth0): canceled DHCP transaction
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7497] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7497] dhcp4 (eth0): state changed no lease
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7499] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7515] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7522] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51767 uid=0 result="fail" reason="Device is not activated"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7532] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7542] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: br-ex: entered promiscuous mode
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7633] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7640] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7642] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7646] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7648] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7650] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7653] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7656] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: vlan21: entered promiscuous mode
Feb 01 14:46:31 compute-0 systemd-udevd[51771]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7667] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7673] dhcp4 (eth0): state changed new lease, address=38.102.83.238
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7688] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7694] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7699] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7703] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7709] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7712] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7716] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7719] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7722] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7726] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7729] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: vlan20: entered promiscuous mode
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7732] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7736] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7741] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7745] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7751] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7781] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7834] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7836] device (eth1): released from controller device eth1
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7844] device (eth1): disconnecting for new activation request.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7845] audit: op="connection-activate" uuid="98bb363c-97f6-5419-a1f6-12d0df6ca2e0" name="ci-private-network" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7851] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7868] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7873] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7875] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7896] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7903] device (eth1): Activation: starting connection 'ci-private-network' (98bb363c-97f6-5419-a1f6-12d0df6ca2e0)
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7906] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7928] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7932] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7938] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: vlan22: entered promiscuous mode
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7962] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7972] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7980] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7991] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7994] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.7997] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8002] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8015] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8025] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8029] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8036] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8042] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8050] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8057] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8064] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8074] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8084] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8092] device (eth1): Activation: successful, device activated.
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8099] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8100] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8106] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 01 14:46:31 compute-0 kernel: vlan23: entered promiscuous mode
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8221] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8234] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8254] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8256] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 01 14:46:31 compute-0 NetworkManager[48987]: <info>  [1769957191.8263] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 01 14:46:32 compute-0 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Feb 01 14:46:32 compute-0 irqbalance[781]: IRQ 26 affinity is now unmanaged
Feb 01 14:46:32 compute-0 NetworkManager[48987]: <info>  [1769957192.9561] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 sudo[52129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeppxhnleffhkdcxlvhrvamdentnrcwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957192.5536277-290-271291263274235/AnsiballZ_async_status.py'
Feb 01 14:46:33 compute-0 sudo[52129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.1755] checkpoint[0x5604352e5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.1757] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 python3.9[52131]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=status _async_dir=/root/.ansible_async
Feb 01 14:46:33 compute-0 sudo[52129]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.5200] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.5216] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.7402] audit: op="networking-control" arg="global-dns-configuration" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.7427] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.7489] audit: op="networking-control" arg="global-dns-configuration" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.7516] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.9035] checkpoint[0x5604352e5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 01 14:46:33 compute-0 NetworkManager[48987]: <info>  [1769957193.9040] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51767 uid=0 result="success"
Feb 01 14:46:33 compute-0 ansible-async_wrapper.py[51765]: Module complete (51765)
Feb 01 14:46:34 compute-0 ansible-async_wrapper.py[51764]: Done in kid B.
Feb 01 14:46:36 compute-0 sudo[52235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfvantzhezberurkgjukvnnhfvvuehjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957192.5536277-290-271291263274235/AnsiballZ_async_status.py'
Feb 01 14:46:36 compute-0 sudo[52235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:36 compute-0 python3.9[52237]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=status _async_dir=/root/.ansible_async
Feb 01 14:46:36 compute-0 sudo[52235]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:36 compute-0 sudo[52335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyibzflpcdpihtwxqelhcxccjqwsnxdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957192.5536277-290-271291263274235/AnsiballZ_async_status.py'
Feb 01 14:46:36 compute-0 sudo[52335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:37 compute-0 python3.9[52337]: ansible-ansible.legacy.async_status Invoked with jid=j837567535167.51761 mode=cleanup _async_dir=/root/.ansible_async
Feb 01 14:46:37 compute-0 sudo[52335]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:37 compute-0 sudo[52487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkoumcmlfrneangmnmcuqtoyrjjvyjre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957197.2743666-317-82045283944133/AnsiballZ_stat.py'
Feb 01 14:46:37 compute-0 sudo[52487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:37 compute-0 python3.9[52489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:46:37 compute-0 sudo[52487]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:37 compute-0 sudo[52610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdxjfixcbnjaivbuwtwixcllwtbbbydl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957197.2743666-317-82045283944133/AnsiballZ_copy.py'
Feb 01 14:46:37 compute-0 sudo[52610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:38 compute-0 python3.9[52612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957197.2743666-317-82045283944133/.source.returncode _original_basename=.oksl7ea6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:38 compute-0 sudo[52610]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:38 compute-0 sudo[52762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzyrmjdxdsggylbpfaivbpykaxuxtyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957198.346142-333-261974950551774/AnsiballZ_stat.py'
Feb 01 14:46:38 compute-0 sudo[52762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:38 compute-0 python3.9[52764]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:46:38 compute-0 sudo[52762]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:39 compute-0 sudo[52885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kishbkdzzcnsyuisewsuoreiawocujnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957198.346142-333-261974950551774/AnsiballZ_copy.py'
Feb 01 14:46:39 compute-0 sudo[52885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:39 compute-0 python3.9[52887]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957198.346142-333-261974950551774/.source.cfg _original_basename=.ja_blvls follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:46:39 compute-0 sudo[52885]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:39 compute-0 sudo[53037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znziviayqvdnsicfblygebzclqyevefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957199.3885617-348-136800497550082/AnsiballZ_systemd.py'
Feb 01 14:46:39 compute-0 sudo[53037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:39 compute-0 python3.9[53039]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:46:39 compute-0 systemd[1]: Reloading Network Manager...
Feb 01 14:46:39 compute-0 NetworkManager[48987]: <info>  [1769957199.9861] audit: op="reload" arg="0" pid=53044 uid=0 result="success"
Feb 01 14:46:39 compute-0 NetworkManager[48987]: <info>  [1769957199.9869] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 01 14:46:39 compute-0 systemd[1]: Reloaded Network Manager.
Feb 01 14:46:40 compute-0 sudo[53037]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:40 compute-0 sshd-session[44992]: Connection closed by 192.168.122.30 port 52844
Feb 01 14:46:40 compute-0 sshd-session[44989]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:46:40 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Feb 01 14:46:40 compute-0 systemd[1]: session-9.scope: Consumed 42.469s CPU time.
Feb 01 14:46:40 compute-0 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Feb 01 14:46:40 compute-0 systemd-logind[786]: Removed session 9.
Feb 01 14:46:44 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 01 14:46:45 compute-0 sshd-session[53077]: Accepted publickey for zuul from 192.168.122.30 port 42502 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:46:45 compute-0 systemd-logind[786]: New session 10 of user zuul.
Feb 01 14:46:45 compute-0 systemd[1]: Started Session 10 of User zuul.
Feb 01 14:46:45 compute-0 sshd-session[53077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:46:46 compute-0 python3.9[53230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:46:47 compute-0 python3.9[53385]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:46:48 compute-0 python3.9[53578]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:46:48 compute-0 sshd-session[53080]: Connection closed by 192.168.122.30 port 42502
Feb 01 14:46:48 compute-0 sshd-session[53077]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:46:48 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Feb 01 14:46:48 compute-0 systemd[1]: session-10.scope: Consumed 1.978s CPU time.
Feb 01 14:46:48 compute-0 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Feb 01 14:46:48 compute-0 systemd-logind[786]: Removed session 10.
Feb 01 14:46:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 01 14:46:53 compute-0 sshd-session[53608]: Accepted publickey for zuul from 192.168.122.30 port 54030 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:46:53 compute-0 systemd-logind[786]: New session 11 of user zuul.
Feb 01 14:46:53 compute-0 systemd[1]: Started Session 11 of User zuul.
Feb 01 14:46:53 compute-0 sshd-session[53608]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:46:54 compute-0 python3.9[53761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:46:55 compute-0 python3.9[53915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:46:55 compute-0 sudo[54070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcibvbmstvdtevcbiugipqkcffidvuol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957215.5853672-35-205621368266890/AnsiballZ_setup.py'
Feb 01 14:46:55 compute-0 sudo[54070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:56 compute-0 python3.9[54072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:46:56 compute-0 sudo[54070]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:56 compute-0 sudo[54154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqvgpyurzjlxrwhdguvfbaududgjnrny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957215.5853672-35-205621368266890/AnsiballZ_dnf.py'
Feb 01 14:46:56 compute-0 sudo[54154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:57 compute-0 python3.9[54156]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:46:58 compute-0 sudo[54154]: pam_unix(sudo:session): session closed for user root
Feb 01 14:46:58 compute-0 sudo[54307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplcnosnmosfjvkssrgwglkfkzktnltx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957218.4726653-47-249268239028775/AnsiballZ_setup.py'
Feb 01 14:46:58 compute-0 sudo[54307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:46:59 compute-0 python3.9[54309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:46:59 compute-0 sudo[54307]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:00 compute-0 sudo[54503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrlyvjalqeuytmuzvxvefkqgjommmpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957219.6068068-58-77166216474563/AnsiballZ_file.py'
Feb 01 14:47:00 compute-0 sudo[54503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:00 compute-0 python3.9[54505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:00 compute-0 sudo[54503]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:00 compute-0 sudo[54655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byskfoyrdofbzhapqyfdwvwixnzcljva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957220.4448473-66-120403594684193/AnsiballZ_command.py'
Feb 01 14:47:00 compute-0 sudo[54655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:01 compute-0 python3.9[54657]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3401655811-merged.mount: Deactivated successfully.
Feb 01 14:47:01 compute-0 podman[54658]: 2026-02-01 14:47:01.202544725 +0000 UTC m=+0.049016496 system refresh
Feb 01 14:47:01 compute-0 sudo[54655]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:01 compute-0 sudo[54818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkczckhpppnxnbfoqemglqzjptddgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957221.36314-74-107677092904576/AnsiballZ_stat.py'
Feb 01 14:47:01 compute-0 sudo[54818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:01 compute-0 python3.9[54820]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:01 compute-0 sudo[54818]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:47:02 compute-0 sudo[54942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faobpbuyqcfuzdtalzmfbqxqkopdnueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957221.36314-74-107677092904576/AnsiballZ_copy.py'
Feb 01 14:47:02 compute-0 sudo[54942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:02 compute-0 python3.9[54944]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957221.36314-74-107677092904576/.source.json follow=False _original_basename=podman_network_config.j2 checksum=df849b85257a814448226e82824bb3e704ca309b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:02 compute-0 sudo[54942]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:02 compute-0 sudo[55094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktwlesnjrxwavfauebfzkjrevjikjcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957222.7425578-89-167218906879635/AnsiballZ_stat.py'
Feb 01 14:47:02 compute-0 sudo[55094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:03 compute-0 python3.9[55096]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:03 compute-0 sudo[55094]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:03 compute-0 sudo[55217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzuozwzsopskvsjesdtviijkdxlvilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957222.7425578-89-167218906879635/AnsiballZ_copy.py'
Feb 01 14:47:03 compute-0 sudo[55217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:03 compute-0 python3.9[55219]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957222.7425578-89-167218906879635/.source.conf follow=False _original_basename=registries.conf.j2 checksum=4ef81be63c2e12f99316ad95ffda51a525eb684e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:03 compute-0 sudo[55217]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:04 compute-0 sudo[55369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdifxwfwpevpagswmqaqxihknmfyblod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957223.7691453-105-245456343338935/AnsiballZ_ini_file.py'
Feb 01 14:47:04 compute-0 sudo[55369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:04 compute-0 python3.9[55371]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:04 compute-0 sudo[55369]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:04 compute-0 sudo[55521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswiyfyviwvcwuvzygcxznynzmybhcxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957224.4391723-105-71795628981/AnsiballZ_ini_file.py'
Feb 01 14:47:04 compute-0 sudo[55521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:04 compute-0 python3.9[55523]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:04 compute-0 sudo[55521]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:05 compute-0 sudo[55673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moconpctypwahogwznabsizfrhcxhpvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957224.9804838-105-190856027911169/AnsiballZ_ini_file.py'
Feb 01 14:47:05 compute-0 sudo[55673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:05 compute-0 python3.9[55675]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:05 compute-0 sudo[55673]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:05 compute-0 sudo[55825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwwsksvsgmysmhlfpokytkfvksnhcua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957225.5245833-105-9967224972132/AnsiballZ_ini_file.py'
Feb 01 14:47:05 compute-0 sudo[55825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:05 compute-0 python3.9[55827]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:05 compute-0 sudo[55825]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:06 compute-0 sudo[55977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drzusekowqdbhhekmtmfnmuabgmiyoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957226.11769-136-87966766998765/AnsiballZ_dnf.py'
Feb 01 14:47:06 compute-0 sudo[55977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:06 compute-0 python3.9[55979]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:47:07 compute-0 sudo[55977]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:08 compute-0 sudo[56130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muszyvshskmxvkkavpwiiznbvbwqccbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957228.1448512-147-21633117788081/AnsiballZ_setup.py'
Feb 01 14:47:08 compute-0 sudo[56130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:08 compute-0 python3.9[56132]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:47:08 compute-0 sudo[56130]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:09 compute-0 sudo[56284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-medzfjoozthqiteawvmourodetypqqis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957228.8894455-155-165492466024321/AnsiballZ_stat.py'
Feb 01 14:47:09 compute-0 sudo[56284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:09 compute-0 python3.9[56286]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:47:09 compute-0 sudo[56284]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:09 compute-0 sudo[56436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqkucppmpvoaesrurlthqhudosnclkxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957229.512027-164-61041693333328/AnsiballZ_stat.py'
Feb 01 14:47:09 compute-0 sudo[56436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:09 compute-0 python3.9[56438]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:47:09 compute-0 sudo[56436]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:10 compute-0 sudo[56588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mboxafyogimdmqbzjtifrsqkahfinmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957230.2162554-174-67636661989888/AnsiballZ_command.py'
Feb 01 14:47:10 compute-0 sudo[56588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:10 compute-0 python3.9[56590]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:47:10 compute-0 sudo[56588]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:11 compute-0 sudo[56741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doemgntodbjouwxvacfizcpdccegfiso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957230.9443145-184-266231720918272/AnsiballZ_service_facts.py'
Feb 01 14:47:11 compute-0 sudo[56741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:11 compute-0 python3.9[56743]: ansible-service_facts Invoked
Feb 01 14:47:11 compute-0 network[56760]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 14:47:11 compute-0 network[56761]: 'network-scripts' will be removed from distribution in near future.
Feb 01 14:47:11 compute-0 network[56762]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 14:47:14 compute-0 sudo[56741]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:15 compute-0 sudo[57045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqghjzcamvkuomstwcvjelxydpnsquk ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769957235.4505699-199-33170595786915/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769957235.4505699-199-33170595786915/args'
Feb 01 14:47:15 compute-0 sudo[57045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:15 compute-0 sudo[57045]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:16 compute-0 sudo[57212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvezbfoukyfdvthaqfiovnkdrsvvgkit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957236.001799-210-187121862078562/AnsiballZ_dnf.py'
Feb 01 14:47:16 compute-0 sudo[57212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:16 compute-0 python3.9[57214]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:47:17 compute-0 sudo[57212]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:18 compute-0 sudo[57365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztwqyevvjcbcsfzhpuczrezuwiizphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957237.9423523-223-265243312338769/AnsiballZ_package_facts.py'
Feb 01 14:47:18 compute-0 sudo[57365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:18 compute-0 python3.9[57367]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 01 14:47:19 compute-0 sudo[57365]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:19 compute-0 sudo[57517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpdnpebcxzzsqlzekdunreknjhogdezz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957239.447651-233-246819591713782/AnsiballZ_stat.py'
Feb 01 14:47:19 compute-0 sudo[57517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:19 compute-0 python3.9[57519]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:20 compute-0 sudo[57517]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:20 compute-0 sudo[57642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbitmzmrqxgstszztlxvfotoelnbjnxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957239.447651-233-246819591713782/AnsiballZ_copy.py'
Feb 01 14:47:20 compute-0 sudo[57642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:20 compute-0 python3.9[57644]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957239.447651-233-246819591713782/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:20 compute-0 sudo[57642]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:20 compute-0 sudo[57796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksynwmqzyuivurmbhhppjatqtwcmlaul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957240.6963396-248-108917328663792/AnsiballZ_stat.py'
Feb 01 14:47:20 compute-0 sudo[57796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:21 compute-0 python3.9[57798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:21 compute-0 sudo[57796]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:21 compute-0 sudo[57921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkvudedmbbwihcixdbdbbmwgqljixct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957240.6963396-248-108917328663792/AnsiballZ_copy.py'
Feb 01 14:47:21 compute-0 sudo[57921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:21 compute-0 python3.9[57923]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957240.6963396-248-108917328663792/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:21 compute-0 sudo[57921]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:22 compute-0 sudo[58075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puuysfkvoqgenyhdjhsszocnfxvwkpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957241.9641306-269-241284218177553/AnsiballZ_lineinfile.py'
Feb 01 14:47:22 compute-0 sudo[58075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:22 compute-0 python3.9[58077]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:22 compute-0 sudo[58075]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:23 compute-0 sudo[58229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpadufrshwerfjyipobrekvqiniqhna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957243.1170235-284-214386026034631/AnsiballZ_setup.py'
Feb 01 14:47:23 compute-0 sudo[58229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:23 compute-0 python3.9[58231]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:47:23 compute-0 sudo[58229]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:24 compute-0 sudo[58313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwvrkekoxpyyvzojuegjtzbkyreqoki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957243.1170235-284-214386026034631/AnsiballZ_systemd.py'
Feb 01 14:47:24 compute-0 sudo[58313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:24 compute-0 python3.9[58315]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:47:24 compute-0 sudo[58313]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:25 compute-0 sudo[58467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jndoormsniywyatyjpndfjfyhwxlhqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957245.234385-300-235337597353553/AnsiballZ_setup.py'
Feb 01 14:47:25 compute-0 sudo[58467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:25 compute-0 python3.9[58469]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:47:26 compute-0 sudo[58467]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:26 compute-0 sudo[58551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhuypweekgpndihwxjvolasxhjppbzel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957245.234385-300-235337597353553/AnsiballZ_systemd.py'
Feb 01 14:47:26 compute-0 sudo[58551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:26 compute-0 python3.9[58553]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:47:26 compute-0 chronyd[800]: chronyd exiting
Feb 01 14:47:26 compute-0 systemd[1]: Stopping NTP client/server...
Feb 01 14:47:26 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Feb 01 14:47:26 compute-0 systemd[1]: Stopped NTP client/server.
Feb 01 14:47:26 compute-0 systemd[1]: Starting NTP client/server...
Feb 01 14:47:26 compute-0 chronyd[58562]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 01 14:47:26 compute-0 chronyd[58562]: Frequency -28.298 +/- 0.211 ppm read from /var/lib/chrony/drift
Feb 01 14:47:26 compute-0 chronyd[58562]: Loaded seccomp filter (level 2)
Feb 01 14:47:26 compute-0 systemd[1]: Started NTP client/server.
Feb 01 14:47:26 compute-0 sudo[58551]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:27 compute-0 sshd-session[53611]: Connection closed by 192.168.122.30 port 54030
Feb 01 14:47:27 compute-0 sshd-session[53608]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:47:27 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Feb 01 14:47:27 compute-0 systemd[1]: session-11.scope: Consumed 22.408s CPU time.
Feb 01 14:47:27 compute-0 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Feb 01 14:47:27 compute-0 systemd-logind[786]: Removed session 11.
Feb 01 14:47:31 compute-0 sshd-session[58588]: Accepted publickey for zuul from 192.168.122.30 port 39982 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:47:31 compute-0 systemd-logind[786]: New session 12 of user zuul.
Feb 01 14:47:31 compute-0 systemd[1]: Started Session 12 of User zuul.
Feb 01 14:47:31 compute-0 sshd-session[58588]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:47:32 compute-0 sudo[58741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzodtwasguoubrodtkkqthzvvtovjdmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957252.0073075-17-137697097784455/AnsiballZ_file.py'
Feb 01 14:47:32 compute-0 sudo[58741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:32 compute-0 python3.9[58743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:32 compute-0 sudo[58741]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:33 compute-0 sudo[58893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udruqbiwqzoqejbhendjuskywlqfcvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957252.7536201-29-15516203598004/AnsiballZ_stat.py'
Feb 01 14:47:33 compute-0 sudo[58893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:33 compute-0 python3.9[58895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:33 compute-0 sudo[58893]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:33 compute-0 sudo[59016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvihkaqtmtegvnegoiwfcbacmkxavgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957252.7536201-29-15516203598004/AnsiballZ_copy.py'
Feb 01 14:47:33 compute-0 sudo[59016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:33 compute-0 python3.9[59018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957252.7536201-29-15516203598004/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:34 compute-0 sudo[59016]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:34 compute-0 sshd-session[58591]: Connection closed by 192.168.122.30 port 39982
Feb 01 14:47:34 compute-0 sshd-session[58588]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:47:34 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Feb 01 14:47:34 compute-0 systemd[1]: session-12.scope: Consumed 1.432s CPU time.
Feb 01 14:47:34 compute-0 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Feb 01 14:47:34 compute-0 systemd-logind[786]: Removed session 12.
Feb 01 14:47:40 compute-0 sshd-session[59043]: Accepted publickey for zuul from 192.168.122.30 port 35200 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:47:40 compute-0 systemd-logind[786]: New session 13 of user zuul.
Feb 01 14:47:40 compute-0 systemd[1]: Started Session 13 of User zuul.
Feb 01 14:47:40 compute-0 sshd-session[59043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:47:40 compute-0 python3.9[59196]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:47:41 compute-0 sudo[59350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshxcdfhnxbucudlbmtxztsftxfrlclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957261.377051-28-52293044364370/AnsiballZ_file.py'
Feb 01 14:47:41 compute-0 sudo[59350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:41 compute-0 python3.9[59352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:41 compute-0 sudo[59350]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:42 compute-0 sudo[59525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbqsfzqwwroifyzgbiswelfwvwlntgxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957262.084522-36-125584435900870/AnsiballZ_stat.py'
Feb 01 14:47:42 compute-0 sudo[59525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:42 compute-0 python3.9[59527]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:42 compute-0 sudo[59525]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:43 compute-0 sudo[59648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoovakvvhpdulihmycpspeconfmknhfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957262.084522-36-125584435900870/AnsiballZ_copy.py'
Feb 01 14:47:43 compute-0 sudo[59648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:43 compute-0 python3.9[59650]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769957262.084522-36-125584435900870/.source.json _original_basename=.apm2majj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:43 compute-0 sudo[59648]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:43 compute-0 sudo[59800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykofnplldhouvvyoovkjhnmfxtzqsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957263.5540307-59-214762095206510/AnsiballZ_stat.py'
Feb 01 14:47:43 compute-0 sudo[59800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:43 compute-0 python3.9[59802]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:43 compute-0 sudo[59800]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:44 compute-0 sudo[59923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwczfvsokilmwmdgjwsbuabqfpovthfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957263.5540307-59-214762095206510/AnsiballZ_copy.py'
Feb 01 14:47:44 compute-0 sudo[59923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:44 compute-0 python3.9[59925]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957263.5540307-59-214762095206510/.source _original_basename=.67ypkozx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:44 compute-0 sudo[59923]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:44 compute-0 sudo[60075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soolxrlzjphxxtockrrpcefllnhlptno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957264.5361538-75-111197104096617/AnsiballZ_file.py'
Feb 01 14:47:44 compute-0 sudo[60075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:45 compute-0 python3.9[60077]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:45 compute-0 sudo[60075]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:45 compute-0 sudo[60227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxnbhoekzmxhefrejhpgqmsxpmmowrfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957265.2075205-83-134559704754399/AnsiballZ_stat.py'
Feb 01 14:47:45 compute-0 sudo[60227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:45 compute-0 python3.9[60229]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:45 compute-0 sudo[60227]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:46 compute-0 sudo[60350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-extzyyvkgvxzlixyraiwqyigjvhfyynx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957265.2075205-83-134559704754399/AnsiballZ_copy.py'
Feb 01 14:47:46 compute-0 sudo[60350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:46 compute-0 python3.9[60352]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957265.2075205-83-134559704754399/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:46 compute-0 sudo[60350]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:46 compute-0 sudo[60502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrtvodytaagvwzpdwqcsxwfqxzmfuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957266.327233-83-227979644496675/AnsiballZ_stat.py'
Feb 01 14:47:46 compute-0 sudo[60502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:46 compute-0 python3.9[60504]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:46 compute-0 sudo[60502]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:47 compute-0 sudo[60625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkcibxwrlbjzbphovehmrgzdqurbcuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957266.327233-83-227979644496675/AnsiballZ_copy.py'
Feb 01 14:47:47 compute-0 sudo[60625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:47 compute-0 python3.9[60627]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957266.327233-83-227979644496675/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:47:47 compute-0 sudo[60625]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:47 compute-0 sudo[60777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drdegebrisbcihqqeergvgnmuxwvocrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957267.4930108-112-239608823692826/AnsiballZ_file.py'
Feb 01 14:47:47 compute-0 sudo[60777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:48 compute-0 python3.9[60779]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:48 compute-0 sudo[60777]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:48 compute-0 sudo[60929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkvmefhemxafkjezcfpmknijbpudpjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957268.384963-120-233589864910945/AnsiballZ_stat.py'
Feb 01 14:47:48 compute-0 sudo[60929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:48 compute-0 python3.9[60931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:48 compute-0 sudo[60929]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:49 compute-0 sudo[61052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvspvewgwxhfcbjzdcxapwqejqevbmtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957268.384963-120-233589864910945/AnsiballZ_copy.py'
Feb 01 14:47:49 compute-0 sudo[61052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:49 compute-0 python3.9[61054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957268.384963-120-233589864910945/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:49 compute-0 sudo[61052]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:49 compute-0 sudo[61204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqpcuzkaicelezthxvyrhwtmwjfndqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957269.4312303-135-124670674768825/AnsiballZ_stat.py'
Feb 01 14:47:49 compute-0 sudo[61204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:49 compute-0 python3.9[61206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:49 compute-0 sudo[61204]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:50 compute-0 sudo[61327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxiebnoiigobgvnzgpsbcgacwttsoutx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957269.4312303-135-124670674768825/AnsiballZ_copy.py'
Feb 01 14:47:50 compute-0 sudo[61327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:50 compute-0 python3.9[61329]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957269.4312303-135-124670674768825/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:50 compute-0 sudo[61327]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:51 compute-0 sudo[61479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxtlfwrzqzisflclaodiceimdpudtnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957270.547176-150-207223992233872/AnsiballZ_systemd.py'
Feb 01 14:47:51 compute-0 sudo[61479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:51 compute-0 python3.9[61481]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:47:51 compute-0 systemd[1]: Reloading.
Feb 01 14:47:51 compute-0 systemd-rc-local-generator[61507]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:47:51 compute-0 systemd-sysv-generator[61510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:47:51 compute-0 systemd[1]: Reloading.
Feb 01 14:47:51 compute-0 systemd-rc-local-generator[61546]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:47:51 compute-0 systemd-sysv-generator[61549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:47:51 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Feb 01 14:47:51 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Feb 01 14:47:51 compute-0 sudo[61479]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:52 compute-0 sudo[61707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmwrljeubyumlowktfbqslptgxdjhog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957271.970541-158-117712906184099/AnsiballZ_stat.py'
Feb 01 14:47:52 compute-0 sudo[61707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:52 compute-0 python3.9[61709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:52 compute-0 sudo[61707]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:52 compute-0 sudo[61830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-focfznlrlvunaalssrxoylhgdtgtvcjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957271.970541-158-117712906184099/AnsiballZ_copy.py'
Feb 01 14:47:52 compute-0 sudo[61830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:52 compute-0 python3.9[61832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957271.970541-158-117712906184099/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:52 compute-0 sudo[61830]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:53 compute-0 sudo[61982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pozkomebrlgbgajrpfzcvqndfavnjilu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957272.9583347-173-11243544153189/AnsiballZ_stat.py'
Feb 01 14:47:53 compute-0 sudo[61982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:53 compute-0 python3.9[61984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:47:53 compute-0 sudo[61982]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:53 compute-0 sudo[62105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjxftgfparytijbscrekgadbdztnpnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957272.9583347-173-11243544153189/AnsiballZ_copy.py'
Feb 01 14:47:53 compute-0 sudo[62105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:53 compute-0 python3.9[62107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957272.9583347-173-11243544153189/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:47:53 compute-0 sudo[62105]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:54 compute-0 sudo[62257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvzepiztjnpeymrucrtmwhetmerikvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957274.0130887-188-158921313291229/AnsiballZ_systemd.py'
Feb 01 14:47:54 compute-0 sudo[62257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:54 compute-0 python3.9[62259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:47:54 compute-0 systemd[1]: Reloading.
Feb 01 14:47:54 compute-0 systemd-rc-local-generator[62282]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:47:54 compute-0 systemd-sysv-generator[62286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:47:54 compute-0 systemd[1]: Reloading.
Feb 01 14:47:54 compute-0 systemd-rc-local-generator[62325]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:47:54 compute-0 systemd-sysv-generator[62329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:47:54 compute-0 systemd[1]: Starting Create netns directory...
Feb 01 14:47:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 01 14:47:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 01 14:47:55 compute-0 systemd[1]: Finished Create netns directory.
Feb 01 14:47:55 compute-0 sudo[62257]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:55 compute-0 python3.9[62485]: ansible-ansible.builtin.service_facts Invoked
Feb 01 14:47:55 compute-0 network[62502]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 14:47:55 compute-0 network[62503]: 'network-scripts' will be removed from distribution in near future.
Feb 01 14:47:55 compute-0 network[62504]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 14:47:58 compute-0 sudo[62764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxswfcuckeqgbttvlbgdqbhvdkcwovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957278.1103575-204-34505052856337/AnsiballZ_systemd.py'
Feb 01 14:47:58 compute-0 sudo[62764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:58 compute-0 python3.9[62766]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:47:58 compute-0 systemd[1]: Reloading.
Feb 01 14:47:58 compute-0 systemd-rc-local-generator[62793]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:47:58 compute-0 systemd-sysv-generator[62799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:47:58 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 01 14:47:58 compute-0 iptables.init[62806]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 01 14:47:59 compute-0 iptables.init[62806]: iptables: Flushing firewall rules: [  OK  ]
Feb 01 14:47:59 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Feb 01 14:47:59 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 01 14:47:59 compute-0 sudo[62764]: pam_unix(sudo:session): session closed for user root
Feb 01 14:47:59 compute-0 sudo[63000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjhzckhsrogmohcktzgfummiuhzxiue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957279.28049-204-176343188965917/AnsiballZ_systemd.py'
Feb 01 14:47:59 compute-0 sudo[63000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:47:59 compute-0 python3.9[63002]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:47:59 compute-0 sudo[63000]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:00 compute-0 sudo[63154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfpourounmgeqoxaffdzfzkwjpvawcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957280.1062832-220-251093242622009/AnsiballZ_systemd.py'
Feb 01 14:48:00 compute-0 sudo[63154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:00 compute-0 python3.9[63156]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:48:00 compute-0 systemd[1]: Reloading.
Feb 01 14:48:00 compute-0 systemd-rc-local-generator[63177]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:48:00 compute-0 systemd-sysv-generator[63182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:48:00 compute-0 systemd[1]: Starting Netfilter Tables...
Feb 01 14:48:00 compute-0 systemd[1]: Finished Netfilter Tables.
Feb 01 14:48:00 compute-0 sudo[63154]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:01 compute-0 sudo[63345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfonfdmgidtmxgalvntfcsekyzuwdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957281.0647147-228-23435246877055/AnsiballZ_command.py'
Feb 01 14:48:01 compute-0 sudo[63345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:01 compute-0 python3.9[63347]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:01 compute-0 sudo[63345]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:02 compute-0 sudo[63498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upfjnoxnwrplykbqsortbyusaacjzgvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957281.9972432-242-24920536579898/AnsiballZ_stat.py'
Feb 01 14:48:02 compute-0 sudo[63498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:02 compute-0 python3.9[63500]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:02 compute-0 sudo[63498]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:02 compute-0 sudo[63623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlhumpgqjiammujwkvtvpgpunfdrcqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957281.9972432-242-24920536579898/AnsiballZ_copy.py'
Feb 01 14:48:02 compute-0 sudo[63623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:02 compute-0 python3.9[63625]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957281.9972432-242-24920536579898/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:02 compute-0 sudo[63623]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:03 compute-0 sudo[63776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqjkdxmtyykyduxpplixmctkwglfjujf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957283.1370652-257-88758740097964/AnsiballZ_systemd.py'
Feb 01 14:48:03 compute-0 sudo[63776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:03 compute-0 python3.9[63778]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:48:03 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Feb 01 14:48:03 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Feb 01 14:48:03 compute-0 sshd[1002]: Received SIGHUP; restarting.
Feb 01 14:48:03 compute-0 sshd[1002]: Server listening on 0.0.0.0 port 22.
Feb 01 14:48:03 compute-0 sshd[1002]: Server listening on :: port 22.
Feb 01 14:48:03 compute-0 sudo[63776]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:04 compute-0 sudo[63932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcczfrmslfdbxldvbpcgbomlvlarbyzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957283.9713562-265-215760443721941/AnsiballZ_file.py'
Feb 01 14:48:04 compute-0 sudo[63932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:04 compute-0 python3.9[63934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:04 compute-0 sudo[63932]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:04 compute-0 sudo[64084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxtzaphzuzoruqrroqybcvjkcrmvvwwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957284.6146722-273-55483130654784/AnsiballZ_stat.py'
Feb 01 14:48:04 compute-0 sudo[64084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:05 compute-0 python3.9[64086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:05 compute-0 sudo[64084]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:05 compute-0 sudo[64207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stelxaitxikqbcnloretqlanpcfskuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957284.6146722-273-55483130654784/AnsiballZ_copy.py'
Feb 01 14:48:05 compute-0 sudo[64207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:05 compute-0 python3.9[64209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957284.6146722-273-55483130654784/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:05 compute-0 sudo[64207]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:06 compute-0 sudo[64359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpokowzhuqacjdaoimctkgeaeltoblys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957285.9651167-291-236828959394381/AnsiballZ_timezone.py'
Feb 01 14:48:06 compute-0 sudo[64359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:06 compute-0 python3.9[64361]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 01 14:48:06 compute-0 systemd[1]: Starting Time & Date Service...
Feb 01 14:48:06 compute-0 systemd[1]: Started Time & Date Service.
Feb 01 14:48:06 compute-0 sudo[64359]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:07 compute-0 sudo[64515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbzjxyhtkajunjzylgdxkkdhelbaxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957287.030022-300-74937496683908/AnsiballZ_file.py'
Feb 01 14:48:07 compute-0 sudo[64515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:07 compute-0 python3.9[64517]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:07 compute-0 sudo[64515]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:07 compute-0 sudo[64667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zztxkwsbrggxwleguzllmjnxzijwnzoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957287.6480298-308-99109954493205/AnsiballZ_stat.py'
Feb 01 14:48:07 compute-0 sudo[64667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:08 compute-0 python3.9[64669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:08 compute-0 sudo[64667]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:08 compute-0 sudo[64790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyrgxndupuphbyuiajhukrdhwqtuuhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957287.6480298-308-99109954493205/AnsiballZ_copy.py'
Feb 01 14:48:08 compute-0 sudo[64790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:08 compute-0 python3.9[64792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957287.6480298-308-99109954493205/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:08 compute-0 sudo[64790]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:09 compute-0 sudo[64942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrpvujhirhoeuveqgaiopdytfjwhytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957288.7809994-323-164931460708408/AnsiballZ_stat.py'
Feb 01 14:48:09 compute-0 sudo[64942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:09 compute-0 python3.9[64944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:09 compute-0 sudo[64942]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:09 compute-0 sudo[65065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyiuvcodglswrwnyoeehybporihdjowt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957288.7809994-323-164931460708408/AnsiballZ_copy.py'
Feb 01 14:48:09 compute-0 sudo[65065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:09 compute-0 python3.9[65067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957288.7809994-323-164931460708408/.source.yaml _original_basename=.t7wpb3ot follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:09 compute-0 sudo[65065]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:09 compute-0 sudo[65217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efjrmzwmyvqwcwxdbxgmfjwdnotbiiqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957289.7760112-338-56452916191340/AnsiballZ_stat.py'
Feb 01 14:48:09 compute-0 sudo[65217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:10 compute-0 python3.9[65219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:10 compute-0 sudo[65217]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:10 compute-0 sudo[65340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdelcarlksnnpeoxdetzjsvynqyiqcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957289.7760112-338-56452916191340/AnsiballZ_copy.py'
Feb 01 14:48:10 compute-0 sudo[65340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:10 compute-0 python3.9[65342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957289.7760112-338-56452916191340/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:10 compute-0 sudo[65340]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:11 compute-0 sudo[65492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhyhfxlcflxkjzxjpexhgzrmbwekshw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957290.8774428-353-170548599141405/AnsiballZ_command.py'
Feb 01 14:48:11 compute-0 sudo[65492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:11 compute-0 python3.9[65494]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:11 compute-0 sudo[65492]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:11 compute-0 sudo[65645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrillokpyssqvowwwxgyxtdmorseuifn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957291.5955193-361-121717791146729/AnsiballZ_command.py'
Feb 01 14:48:11 compute-0 sudo[65645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:12 compute-0 python3.9[65647]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:12 compute-0 sudo[65645]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:12 compute-0 sudo[65798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slqipfjpzktkdjcdasyygxnoxpfhnlds ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957292.2997906-369-239715026991583/AnsiballZ_edpm_nftables_from_files.py'
Feb 01 14:48:12 compute-0 sudo[65798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:12 compute-0 python3[65800]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 01 14:48:12 compute-0 sudo[65798]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:13 compute-0 sudo[65950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkaptdorclylsvfstlspghncbmyvygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957293.1326847-377-14091338783412/AnsiballZ_stat.py'
Feb 01 14:48:13 compute-0 sudo[65950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:13 compute-0 python3.9[65952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:13 compute-0 sudo[65950]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:13 compute-0 sudo[66073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikwkwixzberavkturbnfbhhkmdznpqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957293.1326847-377-14091338783412/AnsiballZ_copy.py'
Feb 01 14:48:13 compute-0 sudo[66073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:14 compute-0 python3.9[66075]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957293.1326847-377-14091338783412/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:14 compute-0 sudo[66073]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:14 compute-0 sudo[66225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshxgjgukmrrqiesogknlbwssaxjubym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957294.266055-392-90358587094445/AnsiballZ_stat.py'
Feb 01 14:48:14 compute-0 sudo[66225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:14 compute-0 python3.9[66227]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:14 compute-0 sudo[66225]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:15 compute-0 sudo[66348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtqyesgigztdnjwmtkogwdcmqymejgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957294.266055-392-90358587094445/AnsiballZ_copy.py'
Feb 01 14:48:15 compute-0 sudo[66348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:15 compute-0 python3.9[66350]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957294.266055-392-90358587094445/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:15 compute-0 sudo[66348]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:15 compute-0 sudo[66500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdozmolvqtnjytoiqcrbzjqokcazclal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957295.4859326-407-174117493731874/AnsiballZ_stat.py'
Feb 01 14:48:15 compute-0 sudo[66500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:15 compute-0 python3.9[66502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:15 compute-0 sudo[66500]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:16 compute-0 sudo[66623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqkvhyooeybvkuacxljxmurwwpdmhrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957295.4859326-407-174117493731874/AnsiballZ_copy.py'
Feb 01 14:48:16 compute-0 sudo[66623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:16 compute-0 python3.9[66625]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957295.4859326-407-174117493731874/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:16 compute-0 sudo[66623]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:16 compute-0 sudo[66775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifivubrqusspbzfgxctikumdhbeackrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957296.569769-422-151688404998444/AnsiballZ_stat.py'
Feb 01 14:48:16 compute-0 sudo[66775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:17 compute-0 python3.9[66777]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:17 compute-0 sudo[66775]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:17 compute-0 sudo[66898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onavjssggavndafhuspqghqnuvqwgmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957296.569769-422-151688404998444/AnsiballZ_copy.py'
Feb 01 14:48:17 compute-0 sudo[66898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:17 compute-0 python3.9[66900]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957296.569769-422-151688404998444/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:17 compute-0 sudo[66898]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:18 compute-0 sudo[67050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyidzbxyvhthviqeddgmeijrrwzdjwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957297.675034-437-270657438241689/AnsiballZ_stat.py'
Feb 01 14:48:18 compute-0 sudo[67050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:18 compute-0 python3.9[67052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:48:18 compute-0 sudo[67050]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:18 compute-0 sudo[67173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbrvmwzkwjepunwbelcqxclkkeiveany ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957297.675034-437-270657438241689/AnsiballZ_copy.py'
Feb 01 14:48:18 compute-0 sudo[67173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:18 compute-0 python3.9[67175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957297.675034-437-270657438241689/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:18 compute-0 sudo[67173]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:19 compute-0 sshd-session[67176]: Connection closed by 3.82.130.45 port 38402 [preauth]
Feb 01 14:48:19 compute-0 sudo[67327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwonwwkgatzmrwcdnifkulsdfzdorxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957299.024535-452-265133827307874/AnsiballZ_file.py'
Feb 01 14:48:19 compute-0 sudo[67327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:19 compute-0 python3.9[67329]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:19 compute-0 sudo[67327]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:19 compute-0 sudo[67479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzwpdzpmxqterwmitrddwxgskljcnpof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957299.600191-460-201069812060415/AnsiballZ_command.py'
Feb 01 14:48:19 compute-0 sudo[67479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:20 compute-0 python3.9[67481]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:20 compute-0 sudo[67479]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:20 compute-0 sudo[67638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgbfugbnuecalufobvsyvzmzmiepktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957300.268013-468-47666947062384/AnsiballZ_blockinfile.py'
Feb 01 14:48:20 compute-0 sudo[67638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:20 compute-0 python3.9[67640]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:20 compute-0 sudo[67638]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:21 compute-0 sudo[67791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucvvigijtsblhqnffzluoxdrqtoyiego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957301.0838966-477-247463806699076/AnsiballZ_file.py'
Feb 01 14:48:21 compute-0 sudo[67791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:21 compute-0 python3.9[67793]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:21 compute-0 sudo[67791]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:21 compute-0 sudo[67943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwburyoluvaljtatlbusegqqkqivxoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957301.648029-477-58154205020517/AnsiballZ_file.py'
Feb 01 14:48:21 compute-0 sudo[67943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:21 compute-0 python3.9[67945]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:22 compute-0 sudo[67943]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:22 compute-0 sudo[68095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsugxbrtwfjnqopcudxiwbftcivvlphq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957302.199309-492-6089175194665/AnsiballZ_mount.py'
Feb 01 14:48:22 compute-0 sudo[68095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:22 compute-0 python3.9[68097]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 01 14:48:22 compute-0 sudo[68095]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:23 compute-0 sudo[68248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faskisjkpqwgmdswerkfmsgshbtplhvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957303.0165524-492-228619722208928/AnsiballZ_mount.py'
Feb 01 14:48:23 compute-0 sudo[68248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:23 compute-0 python3.9[68250]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 01 14:48:23 compute-0 sudo[68248]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:23 compute-0 sshd-session[59046]: Connection closed by 192.168.122.30 port 35200
Feb 01 14:48:23 compute-0 sshd-session[59043]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:48:23 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Feb 01 14:48:23 compute-0 systemd[1]: session-13.scope: Consumed 31.270s CPU time.
Feb 01 14:48:23 compute-0 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Feb 01 14:48:23 compute-0 systemd-logind[786]: Removed session 13.
Feb 01 14:48:29 compute-0 sshd-session[68276]: Accepted publickey for zuul from 192.168.122.30 port 43088 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:48:29 compute-0 systemd-logind[786]: New session 14 of user zuul.
Feb 01 14:48:29 compute-0 systemd[1]: Started Session 14 of User zuul.
Feb 01 14:48:29 compute-0 sshd-session[68276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:48:30 compute-0 sudo[68429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssfturhvkeliroizieypzozxgxnmwppl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957309.5725496-16-56954077195319/AnsiballZ_tempfile.py'
Feb 01 14:48:30 compute-0 sudo[68429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:30 compute-0 python3.9[68431]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 01 14:48:30 compute-0 sudo[68429]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:30 compute-0 sudo[68581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqgtdpxlomxbatxxlxboxvodhzywfeuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957310.3955736-28-227440472371279/AnsiballZ_stat.py'
Feb 01 14:48:30 compute-0 sudo[68581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:30 compute-0 python3.9[68583]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:48:30 compute-0 sudo[68581]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:31 compute-0 sudo[68733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmtvaeiqejpmojhtecvjpdbqhsbexazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957311.1427603-38-198956590282672/AnsiballZ_setup.py'
Feb 01 14:48:31 compute-0 sudo[68733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:31 compute-0 python3.9[68735]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:48:31 compute-0 sudo[68733]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:32 compute-0 sudo[68885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaryzhvskndsscqlrsfnemlogprsbbqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957312.1934721-47-199717319944288/AnsiballZ_blockinfile.py'
Feb 01 14:48:32 compute-0 sudo[68885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:32 compute-0 python3.9[68887]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc91AYQnCiB0gaeezmTYoTbrfn13wkohxC7DIARmFIxyirGt426V9bgiFFpczr0aG/jVGnrXyqspzqVB5qhL9auJ/zaBQu1HuEMj/iSqvtp/5CDZvoCsolbRvc44zq2YNqAjmlgPQKe2f5MpaLGuLQIttz10Aj01eq50uvoj+Hccu0tBH2HrkQ6PphB9SaLI0ycAPr4B4WyPj9bCzJA9VYlxP6l4qkBqQjSDZLHnNDZP7N8pB38yfZB4EeE9v/ooH5aVJpDjV0Ciwtv4zQTv2W/HjYxaR9DsoVdVzUJKnzBZXW+kb2vE/A6rxP/+raWm+Z4jwydT2ZGCcAPe024SW6OUhi434WMJg15As435pj6vNzkfhYX2vPuIZed9Rue7qlD9kPRcg71YkvhFlja7MORqf5+fQtCfHTz9OakK3VATcSgFt4cP8UrBn+vqksDnD16t+njeWjWiJ84mM9yrOXBZblouKVTgDAkKsj+6dVItGIfTdsgn1Xo3eDknUU3Qk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5PgjrlIGkEPCJJDOYu9tmd12o/4td87MoNHh6uIuRZ
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNnAPVuUouOEBJ57nPy2aB3GgfV4SpHa2H6A23QhOI4mJOPaen6XNPSxMMgeo9r5YMVaTTaE35iZ3Xh9PT0kwJ4=
                                             create=True mode=0644 path=/tmp/ansible.3jo0zcqm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:32 compute-0 sudo[68885]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:33 compute-0 sudo[69037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knyyjjipyyrdhbkskbrrmcjcuzoiustq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957312.9139853-55-254770130431648/AnsiballZ_command.py'
Feb 01 14:48:33 compute-0 sudo[69037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:33 compute-0 python3.9[69039]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3jo0zcqm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:33 compute-0 sudo[69037]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:33 compute-0 sudo[69191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqaiuvwdoqoulhyodllyynnxjqrkubap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957313.551029-63-9678412674272/AnsiballZ_file.py'
Feb 01 14:48:33 compute-0 sudo[69191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:34 compute-0 python3.9[69193]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3jo0zcqm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:34 compute-0 sudo[69191]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:34 compute-0 sshd-session[68279]: Connection closed by 192.168.122.30 port 43088
Feb 01 14:48:34 compute-0 sshd-session[68276]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:48:34 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Feb 01 14:48:34 compute-0 systemd[1]: session-14.scope: Consumed 2.885s CPU time.
Feb 01 14:48:34 compute-0 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Feb 01 14:48:34 compute-0 systemd-logind[786]: Removed session 14.
Feb 01 14:48:36 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 01 14:48:39 compute-0 sshd-session[69220]: Accepted publickey for zuul from 192.168.122.30 port 51648 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:48:39 compute-0 systemd-logind[786]: New session 15 of user zuul.
Feb 01 14:48:39 compute-0 systemd[1]: Started Session 15 of User zuul.
Feb 01 14:48:39 compute-0 sshd-session[69220]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:48:40 compute-0 python3.9[69373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:48:41 compute-0 sudo[69527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttxbvoijfzcqxfovvcjriywulkgovubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957320.668198-27-132605126452481/AnsiballZ_systemd.py'
Feb 01 14:48:41 compute-0 sudo[69527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:41 compute-0 python3.9[69529]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 01 14:48:41 compute-0 sudo[69527]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:41 compute-0 sudo[69681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxdsvuzwgqlazbywsiafytzjfmzkbitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957321.6598465-35-129545004406112/AnsiballZ_systemd.py'
Feb 01 14:48:41 compute-0 sudo[69681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:42 compute-0 python3.9[69683]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:48:42 compute-0 sudo[69681]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:42 compute-0 sudo[69834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbpstnuudbaasvkuabitnsvkqssablj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957322.4144025-44-64552045028716/AnsiballZ_command.py'
Feb 01 14:48:42 compute-0 sudo[69834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:42 compute-0 python3.9[69836]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:42 compute-0 sudo[69834]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:43 compute-0 sudo[69987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqdjaoxgfutpwtyzmdtddvsgpafnkju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957323.1053267-52-23982701052657/AnsiballZ_stat.py'
Feb 01 14:48:43 compute-0 sudo[69987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:43 compute-0 python3.9[69989]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:48:43 compute-0 sudo[69987]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:44 compute-0 sudo[70141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfewrmjccabxfaczwprkmxadaaqbhekz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957323.8629458-60-84396998799536/AnsiballZ_command.py'
Feb 01 14:48:44 compute-0 sudo[70141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:44 compute-0 python3.9[70143]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:44 compute-0 sudo[70141]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:44 compute-0 sudo[70296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwcjdrhhgkmlnkhekzuyymazafhqrpdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957324.393335-68-27454663395571/AnsiballZ_file.py'
Feb 01 14:48:44 compute-0 sudo[70296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:45 compute-0 python3.9[70298]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:48:45 compute-0 sudo[70296]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:45 compute-0 sshd-session[69223]: Connection closed by 192.168.122.30 port 51648
Feb 01 14:48:45 compute-0 sshd-session[69220]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:48:45 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Feb 01 14:48:45 compute-0 systemd[1]: session-15.scope: Consumed 4.050s CPU time.
Feb 01 14:48:45 compute-0 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Feb 01 14:48:45 compute-0 systemd-logind[786]: Removed session 15.
Feb 01 14:48:50 compute-0 sshd-session[70323]: Accepted publickey for zuul from 192.168.122.30 port 56024 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:48:50 compute-0 systemd-logind[786]: New session 16 of user zuul.
Feb 01 14:48:50 compute-0 systemd[1]: Started Session 16 of User zuul.
Feb 01 14:48:50 compute-0 sshd-session[70323]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:48:51 compute-0 python3.9[70476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:48:52 compute-0 sudo[70630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpijnmyporjfuubwmevnpyqtxkfuvlpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957331.8757799-29-119406011068245/AnsiballZ_setup.py'
Feb 01 14:48:52 compute-0 sudo[70630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:52 compute-0 python3.9[70632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:48:52 compute-0 sudo[70630]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:53 compute-0 sudo[70714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgbncpxzqwyvixjtahwpjnuhdootntc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957331.8757799-29-119406011068245/AnsiballZ_dnf.py'
Feb 01 14:48:53 compute-0 sudo[70714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:48:53 compute-0 python3.9[70716]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 01 14:48:54 compute-0 sudo[70714]: pam_unix(sudo:session): session closed for user root
Feb 01 14:48:55 compute-0 python3.9[70867]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:48:56 compute-0 python3.9[71018]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 14:48:57 compute-0 python3.9[71168]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:48:57 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 14:48:57 compute-0 python3.9[71319]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:48:58 compute-0 sshd-session[70326]: Connection closed by 192.168.122.30 port 56024
Feb 01 14:48:58 compute-0 sshd-session[70323]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:48:58 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Feb 01 14:48:58 compute-0 systemd[1]: session-16.scope: Consumed 5.370s CPU time.
Feb 01 14:48:58 compute-0 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Feb 01 14:48:58 compute-0 systemd-logind[786]: Removed session 16.
Feb 01 14:49:04 compute-0 sshd-session[71344]: Accepted publickey for zuul from 38.102.83.245 port 41614 ssh2: RSA SHA256:ukhXxVC8oCSeSO9VQn4ZNf7JkO/cu/icAewGEjIjPv8
Feb 01 14:49:04 compute-0 systemd-logind[786]: New session 17 of user zuul.
Feb 01 14:49:04 compute-0 systemd[1]: Started Session 17 of User zuul.
Feb 01 14:49:04 compute-0 sshd-session[71344]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:49:05 compute-0 sudo[71420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwrpzkmefoqajnflvpcbwrnixoodxec ; /usr/bin/python3'
Feb 01 14:49:05 compute-0 sudo[71420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:05 compute-0 useradd[71424]: new group: name=ceph-admin, GID=42478
Feb 01 14:49:05 compute-0 useradd[71424]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Feb 01 14:49:05 compute-0 sudo[71420]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:05 compute-0 sudo[71506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqvthorhisyimloaroendcckiyshfuvo ; /usr/bin/python3'
Feb 01 14:49:05 compute-0 sudo[71506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:05 compute-0 sudo[71506]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:05 compute-0 sudo[71579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-johryyxzukdrzyutwbopawcfwjfhmifs ; /usr/bin/python3'
Feb 01 14:49:05 compute-0 sudo[71579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:06 compute-0 sudo[71579]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:06 compute-0 sudo[71629]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karuhvoqqsyaqwztcqalxpnvpqtqjolv ; /usr/bin/python3'
Feb 01 14:49:06 compute-0 sudo[71629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:06 compute-0 sudo[71629]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:06 compute-0 sudo[71655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhcekckzjcercpxkpcyjulvojpgcxzie ; /usr/bin/python3'
Feb 01 14:49:06 compute-0 sudo[71655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:06 compute-0 sudo[71655]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:06 compute-0 sudo[71681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alnnsnshaaoajffpwyyhfprzrwmuakrd ; /usr/bin/python3'
Feb 01 14:49:06 compute-0 sudo[71681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:07 compute-0 sudo[71681]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:07 compute-0 sudo[71707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xszqvraoaywqaxfjoxqaemxehylboobh ; /usr/bin/python3'
Feb 01 14:49:07 compute-0 sudo[71707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:07 compute-0 sudo[71707]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:07 compute-0 sudo[71785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfhkwhytxgsdzlymxjbowjskksszclx ; /usr/bin/python3'
Feb 01 14:49:07 compute-0 sudo[71785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:07 compute-0 sudo[71785]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:08 compute-0 sudo[71858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzhtfojladaqnenxewlnhjphwjogutg ; /usr/bin/python3'
Feb 01 14:49:08 compute-0 sudo[71858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:08 compute-0 sudo[71858]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:08 compute-0 sudo[71960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okcjhmotusgjboifngkcczyjygnnwujn ; /usr/bin/python3'
Feb 01 14:49:08 compute-0 sudo[71960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:08 compute-0 sudo[71960]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:08 compute-0 sudo[72033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okkcxjgqlyyvkzcvvrvnugcaqywvmvqc ; /usr/bin/python3'
Feb 01 14:49:08 compute-0 sudo[72033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:09 compute-0 sudo[72033]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:09 compute-0 sudo[72083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxfufczypeudqluaqgmksrbtivokjzh ; /usr/bin/python3'
Feb 01 14:49:09 compute-0 sudo[72083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:09 compute-0 python3[72085]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:49:10 compute-0 sudo[72083]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:10 compute-0 sudo[72178]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faywougccutgxdfykqgrdtorwjxwraub ; /usr/bin/python3'
Feb 01 14:49:10 compute-0 sudo[72178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:11 compute-0 python3[72180]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:49:12 compute-0 sudo[72178]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:12 compute-0 sudo[72205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybgiudnbflyngyivpxemcinhcrtxqeh ; /usr/bin/python3'
Feb 01 14:49:12 compute-0 sudo[72205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:12 compute-0 python3[72207]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:12 compute-0 sudo[72205]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:12 compute-0 sudo[72231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycvtfefzjiduaxdghanxldsozwvqkbi ; /usr/bin/python3'
Feb 01 14:49:12 compute-0 sudo[72231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:12 compute-0 python3[72233]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:12 compute-0 kernel: loop: module loaded
Feb 01 14:49:12 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Feb 01 14:49:12 compute-0 sudo[72231]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:12 compute-0 sudo[72266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyemcovteknbwmivigitnmymappiult ; /usr/bin/python3'
Feb 01 14:49:12 compute-0 sudo[72266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:13 compute-0 python3[72268]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:13 compute-0 lvm[72271]: PV /dev/loop3 not used.
Feb 01 14:49:13 compute-0 lvm[72280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:49:13 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 01 14:49:13 compute-0 lvm[72282]:   1 logical volume(s) in volume group "ceph_vg0" now active
Feb 01 14:49:13 compute-0 sudo[72266]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:13 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 01 14:49:13 compute-0 sudo[72358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbqgleufeybtmsgcoktaumivgalhckf ; /usr/bin/python3'
Feb 01 14:49:13 compute-0 sudo[72358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:13 compute-0 python3[72360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:49:13 compute-0 sudo[72358]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:13 compute-0 sudo[72431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzwidirlhipzhmxwbbojthkbixlfjdnd ; /usr/bin/python3'
Feb 01 14:49:13 compute-0 sudo[72431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:14 compute-0 python3[72433]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957353.4675407-36131-147630417988920/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:14 compute-0 sudo[72431]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:14 compute-0 sudo[72481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivnxvgwofrnihhtfffcrhysejbcnesc ; /usr/bin/python3'
Feb 01 14:49:14 compute-0 sudo[72481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:14 compute-0 python3[72483]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:49:14 compute-0 systemd[1]: Reloading.
Feb 01 14:49:14 compute-0 systemd-sysv-generator[72510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:14 compute-0 systemd-rc-local-generator[72507]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 01 14:49:14 compute-0 bash[72523]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Feb 01 14:49:14 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 01 14:49:14 compute-0 lvm[72524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:49:14 compute-0 lvm[72524]: VG ceph_vg0 finished
Feb 01 14:49:15 compute-0 sudo[72481]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:15 compute-0 sudo[72548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrtbsjnlpajwzquqlltqyxtikxpdfjon ; /usr/bin/python3'
Feb 01 14:49:15 compute-0 sudo[72548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:15 compute-0 python3[72550]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:49:16 compute-0 sudo[72548]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:16 compute-0 sudo[72575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtqlesnxuxlrimewdxituoohgaimqga ; /usr/bin/python3'
Feb 01 14:49:16 compute-0 sudo[72575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:16 compute-0 python3[72577]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:16 compute-0 sudo[72575]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:16 compute-0 sudo[72601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbscmqsssgcdkxpzdblwpawjhsokziv ; /usr/bin/python3'
Feb 01 14:49:16 compute-0 sudo[72601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:16 compute-0 python3[72603]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:16 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Feb 01 14:49:16 compute-0 sudo[72601]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:17 compute-0 sudo[72633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uszonhsxgywqioqwrjcuuoqqnthxevsa ; /usr/bin/python3'
Feb 01 14:49:17 compute-0 sudo[72633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:17 compute-0 python3[72635]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:17 compute-0 lvm[72638]: PV /dev/loop4 not used.
Feb 01 14:49:17 compute-0 lvm[72648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:49:17 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb 01 14:49:17 compute-0 lvm[72650]:   1 logical volume(s) in volume group "ceph_vg1" now active
Feb 01 14:49:17 compute-0 sudo[72633]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:17 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb 01 14:49:17 compute-0 sudo[72726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywhqvoglptzpvinknfbzolatflhsqxvm ; /usr/bin/python3'
Feb 01 14:49:17 compute-0 sudo[72726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:17 compute-0 python3[72728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:49:17 compute-0 sudo[72726]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:18 compute-0 sudo[72799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrmjpwgfhkfpdpbduxxrbjhyrwzhlbhu ; /usr/bin/python3'
Feb 01 14:49:18 compute-0 sudo[72799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:18 compute-0 python3[72801]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957357.6285377-36158-254530882717440/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:18 compute-0 sudo[72799]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:18 compute-0 sudo[72849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfgfqxfbhrwmhxiymrqzbzprawyyijzg ; /usr/bin/python3'
Feb 01 14:49:18 compute-0 sudo[72849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:18 compute-0 python3[72851]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:49:18 compute-0 systemd[1]: Reloading.
Feb 01 14:49:18 compute-0 systemd-rc-local-generator[72881]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:18 compute-0 systemd-sysv-generator[72884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:18 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 01 14:49:18 compute-0 bash[72891]: /dev/loop4: [64513]:4356750 (/var/lib/ceph-osd-1.img)
Feb 01 14:49:18 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 01 14:49:18 compute-0 lvm[72892]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:49:18 compute-0 lvm[72892]: VG ceph_vg1 finished
Feb 01 14:49:18 compute-0 sudo[72849]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:19 compute-0 sudo[72916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwuntrsxojjokadyvfpewpyserkxjzo ; /usr/bin/python3'
Feb 01 14:49:19 compute-0 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:19 compute-0 python3[72918]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:49:20 compute-0 sudo[72916]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:20 compute-0 sudo[72943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jadwfnwmjnuiieqmwkgxwlsyquozyfey ; /usr/bin/python3'
Feb 01 14:49:20 compute-0 sudo[72943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:20 compute-0 python3[72945]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:20 compute-0 sudo[72943]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:20 compute-0 sudo[72969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lshlyydojaahkajodoxxvphrekayyodu ; /usr/bin/python3'
Feb 01 14:49:20 compute-0 sudo[72969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:21 compute-0 python3[72971]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:21 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Feb 01 14:49:21 compute-0 sudo[72969]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:21 compute-0 sudo[73001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyulnysxqnaklsqaciygsjyylhgxiooh ; /usr/bin/python3'
Feb 01 14:49:21 compute-0 sudo[73001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:21 compute-0 python3[73003]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:21 compute-0 lvm[73006]: PV /dev/loop5 not used.
Feb 01 14:49:21 compute-0 lvm[73016]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:49:21 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb 01 14:49:21 compute-0 lvm[73018]:   1 logical volume(s) in volume group "ceph_vg2" now active
Feb 01 14:49:21 compute-0 sudo[73001]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:21 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb 01 14:49:21 compute-0 sudo[73094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzotrtzejnekygyecugcijumsxctkdlg ; /usr/bin/python3'
Feb 01 14:49:21 compute-0 sudo[73094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:21 compute-0 python3[73096]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:49:21 compute-0 sudo[73094]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:22 compute-0 sudo[73167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqxhicepbtfnfyabfqatvdtbcacuxxo ; /usr/bin/python3'
Feb 01 14:49:22 compute-0 sudo[73167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:22 compute-0 python3[73169]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957361.7326322-36185-123441704867521/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:22 compute-0 sudo[73167]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:22 compute-0 sudo[73217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlqqqmctxwiezfulwwdrisffydakmtk ; /usr/bin/python3'
Feb 01 14:49:22 compute-0 sudo[73217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:22 compute-0 python3[73219]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:49:22 compute-0 systemd[1]: Reloading.
Feb 01 14:49:22 compute-0 systemd-rc-local-generator[73248]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:22 compute-0 systemd-sysv-generator[73252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:23 compute-0 systemd[1]: Starting Ceph OSD losetup...
Feb 01 14:49:23 compute-0 bash[73259]: /dev/loop5: [64513]:4356753 (/var/lib/ceph-osd-2.img)
Feb 01 14:49:23 compute-0 systemd[1]: Finished Ceph OSD losetup.
Feb 01 14:49:23 compute-0 lvm[73260]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:49:23 compute-0 lvm[73260]: VG ceph_vg2 finished
Feb 01 14:49:23 compute-0 sudo[73217]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:24 compute-0 python3[73284]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:49:26 compute-0 sudo[73375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxevgwqugsozolxpmpfaoubfmcnislz ; /usr/bin/python3'
Feb 01 14:49:26 compute-0 sudo[73375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:26 compute-0 python3[73377]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:49:28 compute-0 sudo[73375]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:28 compute-0 sudo[73433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkdgaeonoxcuwgwhpzqrkbcccldqdku ; /usr/bin/python3'
Feb 01 14:49:28 compute-0 sudo[73433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:29 compute-0 python3[73435]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 01 14:49:31 compute-0 groupadd[73445]: group added to /etc/group: name=cephadm, GID=993
Feb 01 14:49:31 compute-0 groupadd[73445]: group added to /etc/gshadow: name=cephadm
Feb 01 14:49:31 compute-0 groupadd[73445]: new group: name=cephadm, GID=993
Feb 01 14:49:31 compute-0 useradd[73452]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Feb 01 14:49:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 14:49:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 14:49:31 compute-0 sudo[73433]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 14:49:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 14:49:31 compute-0 systemd[1]: run-r7d3447a9278746b3b4366efa4a157989.service: Deactivated successfully.
Feb 01 14:49:31 compute-0 sudo[73552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwntmoifitgzgvmoqdltvdtonepqutm ; /usr/bin/python3'
Feb 01 14:49:31 compute-0 sudo[73552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:32 compute-0 python3[73554]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:32 compute-0 sudo[73552]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:32 compute-0 sudo[73580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgeofrtwjntipxoarvrixreaknsxzkw ; /usr/bin/python3'
Feb 01 14:49:32 compute-0 sudo[73580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:32 compute-0 python3[73582]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:32 compute-0 sudo[73580]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:33 compute-0 sudo[73620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzizvqlppoisqhewghdnfoduuntgxrj ; /usr/bin/python3'
Feb 01 14:49:33 compute-0 sudo[73620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:33 compute-0 python3[73622]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:33 compute-0 sudo[73620]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:33 compute-0 sudo[73646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvbxwzzzthhxzfaryzjuewpontuszgod ; /usr/bin/python3'
Feb 01 14:49:33 compute-0 sudo[73646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:33 compute-0 python3[73648]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:33 compute-0 sudo[73646]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:34 compute-0 sudo[73724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcsbybjzydpkqbhhqzmdhciovzjxluiy ; /usr/bin/python3'
Feb 01 14:49:34 compute-0 sudo[73724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:34 compute-0 python3[73726]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:49:34 compute-0 sudo[73724]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:34 compute-0 sudo[73797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kttgammnwuskwhhfoakzpwlbxkqqadwn ; /usr/bin/python3'
Feb 01 14:49:34 compute-0 sudo[73797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:34 compute-0 python3[73799]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957373.9692478-36334-30023732298472/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:34 compute-0 sudo[73797]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:35 compute-0 sudo[73899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvkxcwrskwphzpmoguvilvvuyansfan ; /usr/bin/python3'
Feb 01 14:49:35 compute-0 sudo[73899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:35 compute-0 python3[73901]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:49:35 compute-0 sudo[73899]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:35 compute-0 sudo[73972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmyodlepfmyvcjnqablqimkalqokfiy ; /usr/bin/python3'
Feb 01 14:49:35 compute-0 sudo[73972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:35 compute-0 python3[73974]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957375.049498-36352-64714602929366/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:49:35 compute-0 sudo[73972]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:35 compute-0 sudo[74022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgryeuposazvdqcuhmlomsrugjahmhy ; /usr/bin/python3'
Feb 01 14:49:35 compute-0 sudo[74022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:36 compute-0 python3[74024]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:36 compute-0 sudo[74022]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:36 compute-0 sudo[74050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhosuousouknlgygidyjwmtgfndhfgz ; /usr/bin/python3'
Feb 01 14:49:36 compute-0 sudo[74050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:36 compute-0 chronyd[58562]: Selected source 198.50.174.203 (pool.ntp.org)
Feb 01 14:49:36 compute-0 python3[74052]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:36 compute-0 sudo[74050]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:36 compute-0 sudo[74078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhnywaupjcdbfeboigauyxcekbiwfavx ; /usr/bin/python3'
Feb 01 14:49:36 compute-0 sudo[74078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:36 compute-0 python3[74080]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:36 compute-0 sudo[74078]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:36 compute-0 python3[74106]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:49:37 compute-0 sudo[74130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wljynclvypjzghfowtzoonxyrkjathxl ; /usr/bin/python3'
Feb 01 14:49:37 compute-0 sudo[74130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:49:37 compute-0 python3[74132]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:49:37 compute-0 sshd-session[74136]: Accepted publickey for ceph-admin from 192.168.122.100 port 38922 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:49:37 compute-0 systemd-logind[786]: New session 18 of user ceph-admin.
Feb 01 14:49:37 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 01 14:49:37 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 01 14:49:37 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 01 14:49:37 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 01 14:49:37 compute-0 systemd[74140]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:49:37 compute-0 systemd[74140]: Queued start job for default target Main User Target.
Feb 01 14:49:37 compute-0 systemd[74140]: Created slice User Application Slice.
Feb 01 14:49:37 compute-0 systemd[74140]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 01 14:49:37 compute-0 systemd[74140]: Started Daily Cleanup of User's Temporary Directories.
Feb 01 14:49:37 compute-0 systemd[74140]: Reached target Paths.
Feb 01 14:49:37 compute-0 systemd[74140]: Reached target Timers.
Feb 01 14:49:37 compute-0 systemd[74140]: Starting D-Bus User Message Bus Socket...
Feb 01 14:49:37 compute-0 systemd[74140]: Starting Create User's Volatile Files and Directories...
Feb 01 14:49:37 compute-0 systemd[74140]: Listening on D-Bus User Message Bus Socket.
Feb 01 14:49:37 compute-0 systemd[74140]: Reached target Sockets.
Feb 01 14:49:37 compute-0 systemd[74140]: Finished Create User's Volatile Files and Directories.
Feb 01 14:49:37 compute-0 systemd[74140]: Reached target Basic System.
Feb 01 14:49:37 compute-0 systemd[74140]: Reached target Main User Target.
Feb 01 14:49:37 compute-0 systemd[74140]: Startup finished in 124ms.
Feb 01 14:49:37 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 01 14:49:37 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Feb 01 14:49:37 compute-0 sshd-session[74136]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:49:37 compute-0 sudo[74156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Feb 01 14:49:37 compute-0 sudo[74156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:49:37 compute-0 sudo[74156]: pam_unix(sudo:session): session closed for user root
Feb 01 14:49:37 compute-0 sshd-session[74155]: Received disconnect from 192.168.122.100 port 38922:11: disconnected by user
Feb 01 14:49:37 compute-0 sshd-session[74155]: Disconnected from user ceph-admin 192.168.122.100 port 38922
Feb 01 14:49:37 compute-0 sshd-session[74136]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 01 14:49:37 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Feb 01 14:49:37 compute-0 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Feb 01 14:49:37 compute-0 systemd-logind[786]: Removed session 18.
Feb 01 14:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4250495151-merged.mount: Deactivated successfully.
Feb 01 14:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4250495151-lower\x2dmapped.mount: Deactivated successfully.
Feb 01 14:49:48 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Feb 01 14:49:48 compute-0 systemd[74140]: Activating special unit Exit the Session...
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped target Main User Target.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped target Basic System.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped target Paths.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped target Sockets.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped target Timers.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 01 14:49:48 compute-0 systemd[74140]: Closed D-Bus User Message Bus Socket.
Feb 01 14:49:48 compute-0 systemd[74140]: Stopped Create User's Volatile Files and Directories.
Feb 01 14:49:48 compute-0 systemd[74140]: Removed slice User Application Slice.
Feb 01 14:49:48 compute-0 systemd[74140]: Reached target Shutdown.
Feb 01 14:49:48 compute-0 systemd[74140]: Finished Exit the Session.
Feb 01 14:49:48 compute-0 systemd[74140]: Reached target Exit the Session.
Feb 01 14:49:48 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Feb 01 14:49:48 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Feb 01 14:49:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 01 14:49:48 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 01 14:49:48 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 01 14:49:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 01 14:49:48 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Feb 01 14:49:54 compute-0 podman[74233]: 2026-02-01 14:49:54.931854271 +0000 UTC m=+16.758031538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:54 compute-0 podman[74292]: 2026-02-01 14:49:54.978008006 +0000 UTC m=+0.031479711 container create 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:49:55 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 01 14:49:55 compute-0 systemd[1]: Started libpod-conmon-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope.
Feb 01 14:49:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:55 compute-0 podman[74292]: 2026-02-01 14:49:54.964881435 +0000 UTC m=+0.018353150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:55 compute-0 podman[74292]: 2026-02-01 14:49:55.065775439 +0000 UTC m=+0.119247184 container init 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:49:55 compute-0 podman[74292]: 2026-02-01 14:49:55.070976337 +0000 UTC m=+0.124448092 container start 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:55 compute-0 podman[74292]: 2026-02-01 14:49:55.075221677 +0000 UTC m=+0.128693462 container attach 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:49:55 compute-0 infallible_einstein[74308]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 01 14:49:55 compute-0 systemd[1]: libpod-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74313]: 2026-02-01 14:49:55.220544759 +0000 UTC m=+0.024803043 container died 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:49:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-15c1e7f9befd4c74060c625f7a99436fe58d38b367e3c4e55aca45ece74faa65-merged.mount: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74313]: 2026-02-01 14:49:55.25698634 +0000 UTC m=+0.061244614 container remove 8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9 (image=quay.io/ceph/ceph:v20, name=infallible_einstein, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:49:55 compute-0 systemd[1]: libpod-conmon-8dbb988abafd0b52c0b8cb3f08332c31cad81750d6cddc51d540f44283cb87c9.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.323254155 +0000 UTC m=+0.045302993 container create f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:49:55 compute-0 systemd[1]: Started libpod-conmon-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope.
Feb 01 14:49:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.385491886 +0000 UTC m=+0.107540744 container init f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.392390412 +0000 UTC m=+0.114439250 container start f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 14:49:55 compute-0 jolly_northcutt[74344]: 167 167
Feb 01 14:49:55 compute-0 systemd[1]: libpod-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.396703074 +0000 UTC m=+0.118751962 container attach f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.397158657 +0000 UTC m=+0.119207505 container died f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.304554426 +0000 UTC m=+0.026603264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:55 compute-0 podman[74328]: 2026-02-01 14:49:55.431020855 +0000 UTC m=+0.153069663 container remove f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13 (image=quay.io/ceph/ceph:v20, name=jolly_northcutt, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 01 14:49:55 compute-0 systemd[1]: libpod-conmon-f9af25cab22a73c5ca1a9bbb907183c7f06a39e5bd9490170b2366ba6a36ac13.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.480828194 +0000 UTC m=+0.036430632 container create 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 14:49:55 compute-0 systemd[1]: Started libpod-conmon-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope.
Feb 01 14:49:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.545923996 +0000 UTC m=+0.101526454 container init 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.551148414 +0000 UTC m=+0.106750862 container start 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.555153797 +0000 UTC m=+0.110756285 container attach 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.461442516 +0000 UTC m=+0.017044994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:55 compute-0 competent_newton[74377]: AQATaH9piNQ8IhAAOrkahw461D5iBEXuZK7gdA==
Feb 01 14:49:55 compute-0 systemd[1]: libpod-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.578115457 +0000 UTC m=+0.133717905 container died 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:49:55 compute-0 podman[74361]: 2026-02-01 14:49:55.609371202 +0000 UTC m=+0.164973660 container remove 9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc (image=quay.io/ceph/ceph:v20, name=competent_newton, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:49:55 compute-0 systemd[1]: libpod-conmon-9df94e49e14b881877197913ecd2c53d1ca6b25d2630c39d77bacdae1fd108cc.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.668511805 +0000 UTC m=+0.045004854 container create bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:49:55 compute-0 systemd[1]: Started libpod-conmon-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope.
Feb 01 14:49:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.728131852 +0000 UTC m=+0.104624941 container init bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.732064073 +0000 UTC m=+0.108557152 container start bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.736823188 +0000 UTC m=+0.113316267 container attach bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.646231125 +0000 UTC m=+0.022724224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:55 compute-0 reverent_turing[74413]: AQATaH9pJBF8LRAABIDa+8Sbw/MmLGIqYlu/JQ==
Feb 01 14:49:55 compute-0 systemd[1]: libpod-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.766462697 +0000 UTC m=+0.142955776 container died bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:49:55 compute-0 podman[74397]: 2026-02-01 14:49:55.810553484 +0000 UTC m=+0.187046573 container remove bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1 (image=quay.io/ceph/ceph:v20, name=reverent_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:49:55 compute-0 systemd[1]: libpod-conmon-bc5c48ab2f9a2888bd3aac69862923b99c9d54780ed882ee6e6c3cd9213b86e1.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.873558807 +0000 UTC m=+0.049511502 container create 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:49:55 compute-0 systemd[1]: Started libpod-conmon-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope.
Feb 01 14:49:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.847288594 +0000 UTC m=+0.023241349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.943393203 +0000 UTC m=+0.119345908 container init 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.947248452 +0000 UTC m=+0.123201117 container start 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.95104387 +0000 UTC m=+0.126996575 container attach 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:49:55 compute-0 clever_kare[74447]: AQATaH9pcU0/OhAA5/qznE0OF88dqcubdDoRWg==
Feb 01 14:49:55 compute-0 systemd[1]: libpod-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope: Deactivated successfully.
Feb 01 14:49:55 compute-0 podman[74431]: 2026-02-01 14:49:55.981253355 +0000 UTC m=+0.157206010 container died 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-521ca0820afe508ddc79a29da2ac46cc6115005ec01cfb51a19d8e5f050c9bf7-merged.mount: Deactivated successfully.
Feb 01 14:49:56 compute-0 podman[74431]: 2026-02-01 14:49:56.008938448 +0000 UTC m=+0.184891103 container remove 2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888 (image=quay.io/ceph/ceph:v20, name=clever_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:56 compute-0 systemd[1]: libpod-conmon-2e91fd9563fb20d6e7d18fe6154faf46adb7cd9e1f3f6b3fc277bc074878b888.scope: Deactivated successfully.
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.065482918 +0000 UTC m=+0.040138017 container create 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:49:56 compute-0 systemd[1]: Started libpod-conmon-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope.
Feb 01 14:49:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfbc1e71b70e2e4a3bac4223cca4c065a30e21007e28e13933b952f9d5d6ba4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.121995647 +0000 UTC m=+0.096650846 container init 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.129946712 +0000 UTC m=+0.104601841 container start 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.134178222 +0000 UTC m=+0.108833401 container attach 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.048663432 +0000 UTC m=+0.023318561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:56 compute-0 clever_tesla[74482]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb 01 14:49:56 compute-0 clever_tesla[74482]: setting min_mon_release = tentacle
Feb 01 14:49:56 compute-0 clever_tesla[74482]: /usr/bin/monmaptool: set fsid to 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:56 compute-0 clever_tesla[74482]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb 01 14:49:56 compute-0 systemd[1]: libpod-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope: Deactivated successfully.
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.177717944 +0000 UTC m=+0.152373063 container died 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:49:56 compute-0 podman[74466]: 2026-02-01 14:49:56.213080815 +0000 UTC m=+0.187735934 container remove 4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086 (image=quay.io/ceph/ceph:v20, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:49:56 compute-0 systemd[1]: libpod-conmon-4ec0e3363f0afdd71f3776ee3e272fbd99c123d0f00abe06c4e6a47a9f026086.scope: Deactivated successfully.
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.302704611 +0000 UTC m=+0.061054569 container create 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:56 compute-0 systemd[1]: Started libpod-conmon-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope.
Feb 01 14:49:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14fe84c648ecac1a14810993ccf3f4051a61ec949c6166ad2c290d11ef6674a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.277496848 +0000 UTC m=+0.035846856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.378392433 +0000 UTC m=+0.136742431 container init 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.392936754 +0000 UTC m=+0.151286712 container start 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.39703289 +0000 UTC m=+0.155382918 container attach 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:49:56 compute-0 systemd[1]: libpod-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope: Deactivated successfully.
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.499259273 +0000 UTC m=+0.257609201 container died 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:49:56 compute-0 podman[74502]: 2026-02-01 14:49:56.538263857 +0000 UTC m=+0.296613795 container remove 9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8 (image=quay.io/ceph/ceph:v20, name=wizardly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 14:49:56 compute-0 systemd[1]: libpod-conmon-9bb4014e331f8350cb3d49011a46c26cc5af5c49652a067e44a21305524b0eb8.scope: Deactivated successfully.
Feb 01 14:49:56 compute-0 systemd[1]: Reloading.
Feb 01 14:49:56 compute-0 systemd-rc-local-generator[74586]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:56 compute-0 systemd-sysv-generator[74589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:56 compute-0 systemd[1]: Reloading.
Feb 01 14:49:56 compute-0 systemd-rc-local-generator[74620]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:56 compute-0 systemd-sysv-generator[74625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:57 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Feb 01 14:49:57 compute-0 systemd[1]: Reloading.
Feb 01 14:49:57 compute-0 systemd-rc-local-generator[74654]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:57 compute-0 systemd-sysv-generator[74657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:57 compute-0 systemd[1]: Reached target Ceph cluster 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:49:57 compute-0 systemd[1]: Reloading.
Feb 01 14:49:57 compute-0 systemd-sysv-generator[74697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:57 compute-0 systemd-rc-local-generator[74694]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:57 compute-0 systemd[1]: Reloading.
Feb 01 14:49:57 compute-0 systemd-rc-local-generator[74738]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:49:57 compute-0 systemd-sysv-generator[74741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:49:57 compute-0 systemd[1]: Created slice Slice /system/ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:49:57 compute-0 systemd[1]: Reached target System Time Set.
Feb 01 14:49:57 compute-0 systemd[1]: Reached target System Time Synchronized.
Feb 01 14:49:57 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:49:57 compute-0 podman[74796]: 2026-02-01 14:49:57.940936158 +0000 UTC m=+0.046715743 container create 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:57 compute-0 podman[74796]: 2026-02-01 14:49:57.998375873 +0000 UTC m=+0.104155468 container init 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:49:58 compute-0 podman[74796]: 2026-02-01 14:49:58.005769343 +0000 UTC m=+0.111548908 container start 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:58 compute-0 bash[74796]: 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4
Feb 01 14:49:58 compute-0 podman[74796]: 2026-02-01 14:49:57.917475814 +0000 UTC m=+0.023255429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:58 compute-0 systemd[1]: Started Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:49:58 compute-0 ceph-mon[74815]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: pidfile_write: ignore empty --pid-file
Feb 01 14:49:58 compute-0 ceph-mon[74815]: load: jerasure load: lrc 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Git sha 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: DB SUMMARY
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: DB Session ID:  K5YBZO4V0HPEJZNFFZIL
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                                     Options.env: 0x56348156d440
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                                Options.info_log: 0x5634833e73e0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                                 Options.wal_dir: 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                    Options.write_buffer_manager: 0x563483366140
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                               Options.row_cache: None
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                              Options.wal_filter: None
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.wal_compression: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.max_background_jobs: 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.max_total_wal_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:       Options.compaction_readahead_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Compression algorithms supported:
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kZSTD supported: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:           Options.merge_operator: 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:        Options.compaction_filter: None
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563483372600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5634833578d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:        Options.write_buffer_size: 33554432
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:  Options.max_write_buffer_number: 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.compression: NoCompression
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.num_levels: 7
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22ff331c-3ab9-4629-8bb9-0845546f6646
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398061993, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398068450, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "K5YBZO4V0HPEJZNFFZIL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957398068570, "job": 1, "event": "recovery_finished"}
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 01 14:49:58 compute-0 podman[74816]: 2026-02-01 14:49:58.078237063 +0000 UTC m=+0.043362098 container create 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563483384e00
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: DB pointer 0x5634834d0000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 14:49:58 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5634833578d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 01 14:49:58 compute-0 ceph-mon[74815]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@-1(???) e0 preinit fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(probing) e0 win_standalone_election
Feb 01 14:49:58 compute-0 ceph-mon[74815]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 01 14:49:58 compute-0 ceph-mon[74815]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T14:49:56.174590+0000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : created 2026-02-01T14:49:56.174590+0000
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-01T14:49:56.448252Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Feb 01 14:49:58 compute-0 systemd[1]: Started libpod-conmon-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope.
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).mds e1 new map
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-02-01T14:49:58:117399+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : fsmap 
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mkfs 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 01 14:49:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 podman[74816]: 2026-02-01 14:49:58.054225604 +0000 UTC m=+0.019350659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:58 compute-0 podman[74816]: 2026-02-01 14:49:58.167549651 +0000 UTC m=+0.132674696 container init 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:58 compute-0 podman[74816]: 2026-02-01 14:49:58.175464734 +0000 UTC m=+0.140589809 container start 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:49:58 compute-0 podman[74816]: 2026-02-01 14:49:58.179038616 +0000 UTC m=+0.144163701 container attach 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867840613' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:   cluster:
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     id:     2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     health: HEALTH_OK
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:  
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:   services:
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     mon: 1 daemons, quorum compute-0 (age 0.2515s) [leader: compute-0]
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     mgr: no daemons active
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     osd: 0 osds: 0 up, 0 in
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:  
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:   data:
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     pools:   0 pools, 0 pgs
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     objects: 0 objects, 0 B
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     usage:   0 B used, 0 B / 0 B avail
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:     pgs:     
Feb 01 14:49:58 compute-0 hopeful_shannon[74870]:  
Feb 01 14:49:58 compute-0 systemd[1]: libpod-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope: Deactivated successfully.
Feb 01 14:49:58 compute-0 podman[74897]: 2026-02-01 14:49:58.433606899 +0000 UTC m=+0.037909584 container died 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d530cffa6c0577a421f423b0fb914bcecd2801c896d66f51ea2639a84f527f-merged.mount: Deactivated successfully.
Feb 01 14:49:58 compute-0 podman[74897]: 2026-02-01 14:49:58.476875114 +0000 UTC m=+0.081177789 container remove 39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94 (image=quay.io/ceph/ceph:v20, name=hopeful_shannon, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:58 compute-0 systemd[1]: libpod-conmon-39f5ab5a24c20c520a4124c36a1f2988e0a2783f3733be33d46acadf96899d94.scope: Deactivated successfully.
Feb 01 14:49:58 compute-0 podman[74912]: 2026-02-01 14:49:58.555562549 +0000 UTC m=+0.049552022 container create 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:49:58 compute-0 systemd[1]: Started libpod-conmon-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope.
Feb 01 14:49:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:58 compute-0 podman[74912]: 2026-02-01 14:49:58.536757288 +0000 UTC m=+0.030746761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:58 compute-0 podman[74912]: 2026-02-01 14:49:58.658927894 +0000 UTC m=+0.152917447 container init 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:49:58 compute-0 podman[74912]: 2026-02-01 14:49:58.664941844 +0000 UTC m=+0.158931287 container start 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 14:49:58 compute-0 podman[74912]: 2026-02-01 14:49:58.669495113 +0000 UTC m=+0.163484566 container attach 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:49:58 compute-0 ceph-mon[74815]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 01 14:49:58 compute-0 keen_robinson[74929]: 
Feb 01 14:49:58 compute-0 keen_robinson[74929]: [global]
Feb 01 14:49:58 compute-0 keen_robinson[74929]:         fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:58 compute-0 keen_robinson[74929]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 01 14:49:58 compute-0 keen_robinson[74929]:         osd_crush_chooseleaf_type = 0
Feb 01 14:49:58 compute-0 systemd[1]: libpod-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope: Deactivated successfully.
Feb 01 14:49:58 compute-0 podman[74955]: 2026-02-01 14:49:58.897970448 +0000 UTC m=+0.025562174 container died 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1130ee3875f51f79aa1f01a8fc533237008b6e9fbd471e2199526ab25e76e2cd-merged.mount: Deactivated successfully.
Feb 01 14:49:59 compute-0 podman[74955]: 2026-02-01 14:49:59.042549229 +0000 UTC m=+0.170140945 container remove 6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d (image=quay.io/ceph/ceph:v20, name=keen_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:49:59 compute-0 systemd[1]: libpod-conmon-6884bd8a4bb46c149f242c7eb4d16f4969ec16478b7e4a8208c2e1871a61378d.scope: Deactivated successfully.
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.09062892 +0000 UTC m=+0.031214725 container create 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:49:59 compute-0 systemd[1]: Started libpod-conmon-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope.
Feb 01 14:49:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:49:59 compute-0 ceph-mon[74815]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:49:59 compute-0 ceph-mon[74815]: monmap epoch 1
Feb 01 14:49:59 compute-0 ceph-mon[74815]: fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:49:59 compute-0 ceph-mon[74815]: last_changed 2026-02-01T14:49:56.174590+0000
Feb 01 14:49:59 compute-0 ceph-mon[74815]: created 2026-02-01T14:49:56.174590+0000
Feb 01 14:49:59 compute-0 ceph-mon[74815]: min_mon_release 20 (tentacle)
Feb 01 14:49:59 compute-0 ceph-mon[74815]: election_strategy: 1
Feb 01 14:49:59 compute-0 ceph-mon[74815]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 01 14:49:59 compute-0 ceph-mon[74815]: fsmap 
Feb 01 14:49:59 compute-0 ceph-mon[74815]: osdmap e1: 0 total, 0 up, 0 in
Feb 01 14:49:59 compute-0 ceph-mon[74815]: mgrmap e1: no daemons active
Feb 01 14:49:59 compute-0 ceph-mon[74815]: from='client.? 192.168.122.100:0/2867840613' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 01 14:49:59 compute-0 ceph-mon[74815]: from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:49:59 compute-0 ceph-mon[74815]: from='client.? 192.168.122.100:0/1195388091' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.168509684 +0000 UTC m=+0.109095469 container init 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.075551053 +0000 UTC m=+0.016136858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.175212453 +0000 UTC m=+0.115798228 container start 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.178462425 +0000 UTC m=+0.119048250 container attach 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:49:59 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:49:59 compute-0 ceph-mon[74815]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183917051' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:49:59 compute-0 systemd[1]: libpod-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope: Deactivated successfully.
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.394657683 +0000 UTC m=+0.335243488 container died 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 14:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb5150d06143fb5557163a0bb3b6069a4620d4674e118e08b2168fd132f7ec3b-merged.mount: Deactivated successfully.
Feb 01 14:49:59 compute-0 podman[74972]: 2026-02-01 14:49:59.43765069 +0000 UTC m=+0.378236475 container remove 101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d (image=quay.io/ceph/ceph:v20, name=sweet_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:49:59 compute-0 systemd[1]: libpod-conmon-101ef987131040b7570de17179bf398010e4585876d1f507da1b2a6569709a6d.scope: Deactivated successfully.
Feb 01 14:49:59 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:49:59 compute-0 ceph-mon[74815]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 01 14:49:59 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 01 14:49:59 compute-0 ceph-mon[74815]: mon.compute-0@0(leader) e1 shutdown
Feb 01 14:49:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[74811]: 2026-02-01T14:49:59.630+0000 7f4e50781640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 01 14:49:59 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 01 14:49:59 compute-0 ceph-mon[74815]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 01 14:49:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[74811]: 2026-02-01T14:49:59.630+0000 7f4e50781640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 01 14:49:59 compute-0 podman[75055]: 2026-02-01 14:49:59.885698058 +0000 UTC m=+0.285394197 container died 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ed5d548b4ae2b619d03227f2925dd04965bf2d40a59fdb81d8db35ef25fbfe-merged.mount: Deactivated successfully.
Feb 01 14:49:59 compute-0 podman[75055]: 2026-02-01 14:49:59.922645073 +0000 UTC m=+0.322341242 container remove 1a7992cf4fd21d61043d19f015b7ab5f12d581f0bae0bec0dcb58ede0a6364a4 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:49:59 compute-0 bash[75055]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0
Feb 01 14:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:50:00 compute-0 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mon.compute-0.service: Deactivated successfully.
Feb 01 14:50:00 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:00 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 01 14:50:00 compute-0 podman[75159]: 2026-02-01 14:50:00.23563625 +0000 UTC m=+0.035272269 container create 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf3559e0dbd98854c2d188c21775f26efc99d5eb7b2314c047ff1c2acbce4f5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 podman[75159]: 2026-02-01 14:50:00.282023543 +0000 UTC m=+0.081659552 container init 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:00 compute-0 podman[75159]: 2026-02-01 14:50:00.294418403 +0000 UTC m=+0.094054422 container start 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:00 compute-0 bash[75159]: 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41
Feb 01 14:50:00 compute-0 podman[75159]: 2026-02-01 14:50:00.218412923 +0000 UTC m=+0.018048942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:00 compute-0 systemd[1]: Started Ceph mon.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:00 compute-0 ceph-mon[75179]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: pidfile_write: ignore empty --pid-file
Feb 01 14:50:00 compute-0 ceph-mon[75179]: load: jerasure load: lrc 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Git sha 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: DB SUMMARY
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: DB Session ID:  9H8HU9QM155BYJ6W9TB0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                                     Options.env: 0x5635c4a03440
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                                Options.info_log: 0x5635c5d0fe80
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                                 Options.wal_dir: 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                    Options.write_buffer_manager: 0x5635c5d5a140
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                               Options.row_cache: None
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                              Options.wal_filter: None
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.wal_compression: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.max_background_jobs: 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.max_total_wal_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:       Options.compaction_readahead_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Compression algorithms supported:
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kZSTD supported: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:           Options.merge_operator: 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:        Options.compaction_filter: None
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5635c5d66a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5635c5d4b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:        Options.write_buffer_size: 33554432
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:  Options.max_write_buffer_number: 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.compression: NoCompression
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.num_levels: 7
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 22ff331c-3ab9-4629-8bb9-0845546f6646
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400356976, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400362520, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957400, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957400362631, "job": 1, "event": "recovery_finished"}
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5635c5d78e00
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: DB pointer 0x5635c5ec2000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 14:50:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 01 14:50:00 compute-0 ceph-mon[75179]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???) e1 preinit fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).mds e1 new map
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-02-01T14:49:58:117399+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 01 14:50:00 compute-0 ceph-mon[75179]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.382349942 +0000 UTC m=+0.052349723 container create 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : last_changed 2026-02-01T14:49:56.174590+0000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : created 2026-02-01T14:49:56.174590+0000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 01 14:50:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 01 14:50:00 compute-0 systemd[1]: Started libpod-conmon-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope.
Feb 01 14:50:00 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12c00ea073014746946adbf38bbffc72e7794034ea9f8084e2201b3b7dde37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: monmap epoch 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:00 compute-0 ceph-mon[75179]: last_changed 2026-02-01T14:49:56.174590+0000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: created 2026-02-01T14:49:56.174590+0000
Feb 01 14:50:00 compute-0 ceph-mon[75179]: min_mon_release 20 (tentacle)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: election_strategy: 1
Feb 01 14:50:00 compute-0 ceph-mon[75179]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 01 14:50:00 compute-0 ceph-mon[75179]: fsmap 
Feb 01 14:50:00 compute-0 ceph-mon[75179]: osdmap e1: 0 total, 0 up, 0 in
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mgrmap e1: no daemons active
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.455688117 +0000 UTC m=+0.125687978 container init 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.364541458 +0000 UTC m=+0.034541269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.461023758 +0000 UTC m=+0.131023539 container start 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.469596631 +0000 UTC m=+0.139596432 container attach 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 01 14:50:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb 01 14:50:00 compute-0 systemd[1]: libpod-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope: Deactivated successfully.
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.686966921 +0000 UTC m=+0.356966722 container died 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 14:50:00 compute-0 podman[75180]: 2026-02-01 14:50:00.728452825 +0000 UTC m=+0.398452606 container remove 2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be (image=quay.io/ceph/ceph:v20, name=great_lederberg, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:50:00 compute-0 systemd[1]: libpod-conmon-2261a6149ba75232b4f50658dae27d37b90bb83b16142d5b06b8c51764dcf9be.scope: Deactivated successfully.
Feb 01 14:50:00 compute-0 podman[75270]: 2026-02-01 14:50:00.803745666 +0000 UTC m=+0.051879829 container create 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:00 compute-0 systemd[1]: Started libpod-conmon-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope.
Feb 01 14:50:00 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:00 compute-0 podman[75270]: 2026-02-01 14:50:00.785715216 +0000 UTC m=+0.033849379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:00 compute-0 podman[75270]: 2026-02-01 14:50:00.901701258 +0000 UTC m=+0.149835461 container init 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:00 compute-0 podman[75270]: 2026-02-01 14:50:00.909368215 +0000 UTC m=+0.157502368 container start 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:50:00 compute-0 podman[75270]: 2026-02-01 14:50:00.920209162 +0000 UTC m=+0.168343385 container attach 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 14:50:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb 01 14:50:01 compute-0 systemd[1]: libpod-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope: Deactivated successfully.
Feb 01 14:50:01 compute-0 podman[75270]: 2026-02-01 14:50:01.147440621 +0000 UTC m=+0.395574774 container died 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-82e8b083639cd35f2e3418eb30bb6ce75044d80a153dadee7eb0f44cd090b3a1-merged.mount: Deactivated successfully.
Feb 01 14:50:01 compute-0 podman[75270]: 2026-02-01 14:50:01.185267642 +0000 UTC m=+0.433401765 container remove 1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36 (image=quay.io/ceph/ceph:v20, name=peaceful_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:01 compute-0 systemd[1]: libpod-conmon-1eac4156d43de32b7f3ca6e9560d34ebd4f8726be0ea440b880feac4a96b1a36.scope: Deactivated successfully.
Feb 01 14:50:01 compute-0 systemd[1]: Reloading.
Feb 01 14:50:01 compute-0 systemd-sysv-generator[75355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:01 compute-0 systemd-rc-local-generator[75347]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:01 compute-0 systemd[1]: Reloading.
Feb 01 14:50:01 compute-0 systemd-sysv-generator[75394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:01 compute-0 systemd-rc-local-generator[75391]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:01 compute-0 systemd[1]: Starting Ceph mgr.compute-0.viosrg for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:02 compute-0 podman[75450]: 2026-02-01 14:50:02.041445179 +0000 UTC m=+0.048739970 container create c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d461a9ad17540db43665e055dcde16173cf80c235d6007abc5404513771bb/merged/var/lib/ceph/mgr/ceph-compute-0.viosrg supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 podman[75450]: 2026-02-01 14:50:02.103256538 +0000 UTC m=+0.110551379 container init c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:50:02 compute-0 podman[75450]: 2026-02-01 14:50:02.016283877 +0000 UTC m=+0.023578718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:02 compute-0 podman[75450]: 2026-02-01 14:50:02.111910503 +0000 UTC m=+0.119205304 container start c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:02 compute-0 bash[75450]: c0b520f4a0119ce9f8a9371a92144a204b1e0b06ca11020b37e89fb67c28dbf0
Feb 01 14:50:02 compute-0 systemd[1]: Started Ceph mgr.compute-0.viosrg for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: pidfile_write: ignore empty --pid-file
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'alerts'
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.214406882 +0000 UTC m=+0.061642865 container create eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:02 compute-0 systemd[1]: Started libpod-conmon-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope.
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.189451266 +0000 UTC m=+0.036687329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'balancer'
Feb 01 14:50:02 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.318602811 +0000 UTC m=+0.165838864 container init eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.325063104 +0000 UTC m=+0.172299097 container start eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.329107738 +0000 UTC m=+0.176343821 container attach eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'cephadm'
Feb 01 14:50:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 01 14:50:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4243542664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:02 compute-0 epic_chaum[75507]: 
Feb 01 14:50:02 compute-0 epic_chaum[75507]: {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "health": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "status": "HEALTH_OK",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "checks": {},
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "mutes": []
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "election_epoch": 5,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "quorum": [
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         0
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     ],
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "quorum_names": [
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "compute-0"
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     ],
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "quorum_age": 2,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "monmap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "epoch": 1,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "min_mon_release_name": "tentacle",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_mons": 1
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "osdmap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "epoch": 1,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_osds": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_up_osds": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "osd_up_since": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_in_osds": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "osd_in_since": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_remapped_pgs": 0
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "pgmap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "pgs_by_state": [],
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_pgs": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_pools": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_objects": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "data_bytes": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "bytes_used": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "bytes_avail": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "bytes_total": 0
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "fsmap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "epoch": 1,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "btime": "2026-02-01T14:49:58.117399+0000",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "by_rank": [],
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "up:standby": 0
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "mgrmap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "available": false,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "num_standbys": 0,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "modules": [
Feb 01 14:50:02 compute-0 epic_chaum[75507]:             "iostat",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:             "nfs"
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         ],
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "services": {}
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "servicemap": {
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "epoch": 1,
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "modified": "2026-02-01T14:49:58.120892+0000",
Feb 01 14:50:02 compute-0 epic_chaum[75507]:         "services": {}
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     },
Feb 01 14:50:02 compute-0 epic_chaum[75507]:     "progress_events": {}
Feb 01 14:50:02 compute-0 epic_chaum[75507]: }
Feb 01 14:50:02 compute-0 systemd[1]: libpod-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope: Deactivated successfully.
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.573608317 +0000 UTC m=+0.420844340 container died eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1aea80a6f2121aa55a109b745afad73d4d2520726ee91210655c39d326e4ef6-merged.mount: Deactivated successfully.
Feb 01 14:50:02 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4243542664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:02 compute-0 podman[75470]: 2026-02-01 14:50:02.618620661 +0000 UTC m=+0.465856644 container remove eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961 (image=quay.io/ceph/ceph:v20, name=epic_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 14:50:02 compute-0 systemd[1]: libpod-conmon-eef9cc41736c1f735bf02e1e87e719e99e7f79531b9630d721377c99ccecd961.scope: Deactivated successfully.
Feb 01 14:50:02 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'crash'
Feb 01 14:50:03 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'dashboard'
Feb 01 14:50:03 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'devicehealth'
Feb 01 14:50:03 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'diskprediction_local'
Feb 01 14:50:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 01 14:50:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 01 14:50:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]:   from numpy import show_config as show_numpy_config
Feb 01 14:50:03 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'influx'
Feb 01 14:50:03 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'insights'
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'iostat'
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'k8sevents'
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'localpool'
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'mds_autoscaler'
Feb 01 14:50:04 compute-0 podman[75557]: 2026-02-01 14:50:04.711585815 +0000 UTC m=+0.068618462 container create 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:04 compute-0 systemd[1]: Started libpod-conmon-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope.
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'mirroring'
Feb 01 14:50:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:04 compute-0 podman[75557]: 2026-02-01 14:50:04.684358165 +0000 UTC m=+0.041390852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:04 compute-0 podman[75557]: 2026-02-01 14:50:04.790122718 +0000 UTC m=+0.147155435 container init 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:04 compute-0 podman[75557]: 2026-02-01 14:50:04.793589266 +0000 UTC m=+0.150621913 container start 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:04 compute-0 podman[75557]: 2026-02-01 14:50:04.797556328 +0000 UTC m=+0.154588995 container attach 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:50:04 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'nfs'
Feb 01 14:50:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 01 14:50:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097144664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:04 compute-0 angry_leavitt[75573]: 
Feb 01 14:50:04 compute-0 angry_leavitt[75573]: {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "health": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "status": "HEALTH_OK",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "checks": {},
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "mutes": []
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "election_epoch": 5,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "quorum": [
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         0
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     ],
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "quorum_names": [
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "compute-0"
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     ],
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "quorum_age": 4,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "monmap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "epoch": 1,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "min_mon_release_name": "tentacle",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_mons": 1
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "osdmap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "epoch": 1,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_osds": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_up_osds": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "osd_up_since": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_in_osds": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "osd_in_since": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_remapped_pgs": 0
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "pgmap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "pgs_by_state": [],
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_pgs": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_pools": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_objects": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "data_bytes": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "bytes_used": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "bytes_avail": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "bytes_total": 0
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "fsmap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "epoch": 1,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "btime": "2026-02-01T14:49:58.117399+0000",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "by_rank": [],
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "up:standby": 0
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "mgrmap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "available": false,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "num_standbys": 0,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "modules": [
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:             "iostat",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:             "nfs"
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         ],
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "services": {}
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "servicemap": {
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "epoch": 1,
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "modified": "2026-02-01T14:49:58.120892+0000",
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:         "services": {}
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     },
Feb 01 14:50:04 compute-0 angry_leavitt[75573]:     "progress_events": {}
Feb 01 14:50:04 compute-0 angry_leavitt[75573]: }
Feb 01 14:50:04 compute-0 systemd[1]: libpod-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope: Deactivated successfully.
Feb 01 14:50:05 compute-0 podman[75599]: 2026-02-01 14:50:05.02274794 +0000 UTC m=+0.021881240 container died 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1097144664' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-19e0b753068d03f70d6bf53583b4a4938912d4833d349c044b0121a7e932ab03-merged.mount: Deactivated successfully.
Feb 01 14:50:05 compute-0 podman[75599]: 2026-02-01 14:50:05.057560145 +0000 UTC m=+0.056693425 container remove 5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c (image=quay.io/ceph/ceph:v20, name=angry_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:50:05 compute-0 systemd[1]: libpod-conmon-5d75cb4a7e90581c5778bd18d9ceacec2693e2a6d56e9c2a1093fb52ab53237c.scope: Deactivated successfully.
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'orchestrator'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'osd_perf_query'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'osd_support'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'pg_autoscaler'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'progress'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'prometheus'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rbd_support'
Feb 01 14:50:05 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rgw'
Feb 01 14:50:06 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rook'
Feb 01 14:50:06 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'selftest'
Feb 01 14:50:06 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'smb'
Feb 01 14:50:06 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'snap_schedule'
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'stats'
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'status'
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.142874953 +0000 UTC m=+0.057695064 container create 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:07 compute-0 systemd[1]: Started libpod-conmon-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope.
Feb 01 14:50:07 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.117719071 +0000 UTC m=+0.032539222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'telegraf'
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.248146592 +0000 UTC m=+0.162966753 container init 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.252588907 +0000 UTC m=+0.167409018 container start 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.256171999 +0000 UTC m=+0.170992120 container attach 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'telemetry'
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'test_orchestrator'
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260895271' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]: 
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]: {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "health": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "status": "HEALTH_OK",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "checks": {},
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "mutes": []
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "election_epoch": 5,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "quorum": [
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         0
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     ],
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "quorum_names": [
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "compute-0"
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     ],
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "quorum_age": 7,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "monmap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "epoch": 1,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "min_mon_release_name": "tentacle",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_mons": 1
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "osdmap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "epoch": 1,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_osds": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_up_osds": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "osd_up_since": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_in_osds": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "osd_in_since": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_remapped_pgs": 0
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "pgmap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "pgs_by_state": [],
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_pgs": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_pools": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_objects": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "data_bytes": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "bytes_used": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "bytes_avail": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "bytes_total": 0
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "fsmap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "epoch": 1,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "btime": "2026-02-01T14:49:58:117399+0000",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "by_rank": [],
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "up:standby": 0
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "mgrmap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "available": false,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "num_standbys": 0,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "modules": [
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:             "iostat",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:             "nfs"
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         ],
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "services": {}
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "servicemap": {
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "epoch": 1,
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "modified": "2026-02-01T14:49:58.120892+0000",
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:         "services": {}
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     },
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]:     "progress_events": {}
Feb 01 14:50:07 compute-0 nice_mcclintock[75633]: }
Feb 01 14:50:07 compute-0 systemd[1]: libpod-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope: Deactivated successfully.
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.456992101 +0000 UTC m=+0.371812172 container died 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 14:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea71c4c08ae8e24479ddc8aba0a4b74afcd55c83a86befade8c9495b287efa1-merged.mount: Deactivated successfully.
Feb 01 14:50:07 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4260895271' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:07 compute-0 podman[75615]: 2026-02-01 14:50:07.496026186 +0000 UTC m=+0.410846257 container remove 1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010 (image=quay.io/ceph/ceph:v20, name=nice_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:07 compute-0 systemd[1]: libpod-conmon-1e724c19485b4329ddaa207dc495ea928de78c6c3df59bc72a0005fd84103010.scope: Deactivated successfully.
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'volumes'
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: ms_deliver_dispatch: unhandled message 0x56054db29860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.viosrg
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr handle_mgr_map Activating!
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr handle_mgr_map I am now activating
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.viosrg(active, starting, since 0.0122349s)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e1 all = 1
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: balancer
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Manager daemon compute-0.viosrg is now available
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: crash
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer INFO root] Starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: devicehealth
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: iostat
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: nfs
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:50:07
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [balancer INFO root] No pools available
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: orchestrator
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: pg_autoscaler
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: progress
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [progress INFO root] Loading...
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [progress INFO root] No stored events to load
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [progress INFO root] Loaded [] historic events
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [progress INFO root] Loaded OSDMap, ready.
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] recovery thread starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] starting setup
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: rbd_support
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: status
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} v 0)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: telemetry
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] PerfHandler: starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TaskHandler: starting
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: [rbd_support INFO root] setup complete
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb 01 14:50:07 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: volumes
Feb 01 14:50:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:08 compute-0 ceph-mon[75179]: Activating manager daemon compute-0.viosrg
Feb 01 14:50:08 compute-0 ceph-mon[75179]: mgrmap e2: compute-0.viosrg(active, starting, since 0.0122349s)
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: Manager daemon compute-0.viosrg is now available
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:08 compute-0 ceph-mon[75179]: from='mgr.14102 192.168.122.100:0/2957817937' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:08 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.viosrg(active, since 1.02628s)
Feb 01 14:50:09 compute-0 podman[75749]: 2026-02-01 14:50:09.58053212 +0000 UTC m=+0.062066847 container create 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:09 compute-0 systemd[1]: Started libpod-conmon-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope.
Feb 01 14:50:09 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:09 compute-0 podman[75749]: 2026-02-01 14:50:09.553779803 +0000 UTC m=+0.035314590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:09 compute-0 podman[75749]: 2026-02-01 14:50:09.667657296 +0000 UTC m=+0.149192083 container init 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:50:09 compute-0 podman[75749]: 2026-02-01 14:50:09.674812968 +0000 UTC m=+0.156347685 container start 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:09 compute-0 podman[75749]: 2026-02-01 14:50:09.678557104 +0000 UTC m=+0.160091881 container attach 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:50:09 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:09 compute-0 ceph-mon[75179]: mgrmap e3: compute-0.viosrg(active, since 1.02628s)
Feb 01 14:50:09 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:09 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.viosrg(active, since 2s)
Feb 01 14:50:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 01 14:50:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673905382' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:10 compute-0 bold_snyder[75766]: 
Feb 01 14:50:10 compute-0 bold_snyder[75766]: {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "health": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "status": "HEALTH_OK",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "checks": {},
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "mutes": []
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "election_epoch": 5,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "quorum": [
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         0
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     ],
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "quorum_names": [
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "compute-0"
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     ],
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "quorum_age": 9,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "monmap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "epoch": 1,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "min_mon_release_name": "tentacle",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_mons": 1
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "osdmap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "epoch": 1,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_osds": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_up_osds": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "osd_up_since": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_in_osds": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "osd_in_since": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_remapped_pgs": 0
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "pgmap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "pgs_by_state": [],
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_pgs": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_pools": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_objects": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "data_bytes": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "bytes_used": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "bytes_avail": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "bytes_total": 0
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "fsmap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "epoch": 1,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "btime": "2026-02-01T14:49:58:117399+0000",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "by_rank": [],
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "up:standby": 0
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "mgrmap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "available": true,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "num_standbys": 0,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "modules": [
Feb 01 14:50:10 compute-0 bold_snyder[75766]:             "iostat",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:             "nfs"
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         ],
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "services": {}
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "servicemap": {
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "epoch": 1,
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "modified": "2026-02-01T14:49:58.120892+0000",
Feb 01 14:50:10 compute-0 bold_snyder[75766]:         "services": {}
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     },
Feb 01 14:50:10 compute-0 bold_snyder[75766]:     "progress_events": {}
Feb 01 14:50:10 compute-0 bold_snyder[75766]: }
Feb 01 14:50:10 compute-0 systemd[1]: libpod-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope: Deactivated successfully.
Feb 01 14:50:10 compute-0 podman[75749]: 2026-02-01 14:50:10.201884963 +0000 UTC m=+0.683419740 container died 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-26aae273b1f0f8666f1af297fa04772abc7da434b7d7659ca3e7107523f494e2-merged.mount: Deactivated successfully.
Feb 01 14:50:10 compute-0 podman[75749]: 2026-02-01 14:50:10.247933136 +0000 UTC m=+0.729467863 container remove 59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d (image=quay.io/ceph/ceph:v20, name=bold_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:10 compute-0 systemd[1]: libpod-conmon-59ece4d360d57cca5f7ad0456ac3f399f6ade7d0e26afd126ff1273fdfa2a10d.scope: Deactivated successfully.
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.327793736 +0000 UTC m=+0.057335534 container create 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:50:10 compute-0 systemd[1]: Started libpod-conmon-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope.
Feb 01 14:50:10 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.304097185 +0000 UTC m=+0.033639063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.438666513 +0000 UTC m=+0.168208381 container init 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.442226194 +0000 UTC m=+0.171768002 container start 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.453644217 +0000 UTC m=+0.183186085 container attach 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:50:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 01 14:50:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1846296928' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:50:10 compute-0 elated_burnell[75820]: 
Feb 01 14:50:10 compute-0 elated_burnell[75820]: [global]
Feb 01 14:50:10 compute-0 elated_burnell[75820]:         fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:10 compute-0 elated_burnell[75820]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 01 14:50:10 compute-0 elated_burnell[75820]:         osd_crush_chooseleaf_type = 0
Feb 01 14:50:10 compute-0 systemd[1]: libpod-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope: Deactivated successfully.
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.865736968 +0000 UTC m=+0.595278776 container died 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d4d439d2b2dbc263272ea835868907988623b50408798ea071ab2da1692375-merged.mount: Deactivated successfully.
Feb 01 14:50:10 compute-0 podman[75804]: 2026-02-01 14:50:10.904573437 +0000 UTC m=+0.634115205 container remove 3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4 (image=quay.io/ceph/ceph:v20, name=elated_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:10 compute-0 systemd[1]: libpod-conmon-3479747999aa18ba24058b76e9fca7cf741c3d5860a25ed0780de1b25203a5f4.scope: Deactivated successfully.
Feb 01 14:50:10 compute-0 ceph-mon[75179]: mgrmap e4: compute-0.viosrg(active, since 2s)
Feb 01 14:50:10 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2673905382' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 01 14:50:10 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1846296928' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:50:10 compute-0 podman[75858]: 2026-02-01 14:50:10.955325833 +0000 UTC m=+0.036070532 container create 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:50:10 compute-0 systemd[1]: Started libpod-conmon-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope.
Feb 01 14:50:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:11 compute-0 podman[75858]: 2026-02-01 14:50:10.939111574 +0000 UTC m=+0.019856343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:11 compute-0 podman[75858]: 2026-02-01 14:50:11.038014973 +0000 UTC m=+0.118759752 container init 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:11 compute-0 podman[75858]: 2026-02-01 14:50:11.04427769 +0000 UTC m=+0.125022409 container start 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:11 compute-0 podman[75858]: 2026-02-01 14:50:11.047382588 +0000 UTC m=+0.128127307 container attach 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:50:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb 01 14:50:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:11 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 01 14:50:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  1: '-n'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  2: 'mgr.compute-0.viosrg'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  3: '-f'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  4: '--setuser'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  5: 'ceph'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  6: '--setgroup'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  7: 'ceph'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  8: '--default-log-to-file=false'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  9: '--default-log-to-journald=true'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 01 14:50:11 compute-0 ceph-mgr[75469]: mgr respawn  exe_path /proc/self/exe
Feb 01 14:50:11 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.viosrg(active, since 4s)
Feb 01 14:50:11 compute-0 systemd[1]: libpod-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope: Deactivated successfully.
Feb 01 14:50:11 compute-0 podman[75858]: 2026-02-01 14:50:11.976599842 +0000 UTC m=+1.057344521 container died 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a10726461299c3917ab90f0ac6239a0a25b53d509537cfe62951d942bcfbfc1-merged.mount: Deactivated successfully.
Feb 01 14:50:12 compute-0 podman[75858]: 2026-02-01 14:50:12.01610085 +0000 UTC m=+1.096845559 container remove 6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa (image=quay.io/ceph/ceph:v20, name=intelligent_beaver, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:12 compute-0 systemd[1]: libpod-conmon-6cab154545a7d8f579919c455838de7081f4763adbac5aa5ec87bf3095ac05fa.scope: Deactivated successfully.
Feb 01 14:50:12 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: ignoring --setuser ceph since I am not root
Feb 01 14:50:12 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: ignoring --setgroup ceph since I am not root
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: pidfile_write: ignore empty --pid-file
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'alerts'
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.091152224 +0000 UTC m=+0.053408353 container create 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:12 compute-0 systemd[1]: Started libpod-conmon-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope.
Feb 01 14:50:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.069364167 +0000 UTC m=+0.031620336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'balancer'
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.168470571 +0000 UTC m=+0.130726710 container init 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.173673179 +0000 UTC m=+0.135929298 container start 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.176798597 +0000 UTC m=+0.139054716 container attach 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'cephadm'
Feb 01 14:50:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 01 14:50:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230403667' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]: {
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]:     "epoch": 5,
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]:     "available": true,
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]:     "active_name": "compute-0.viosrg",
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]:     "num_standby": 0
Feb 01 14:50:12 compute-0 peaceful_williamson[75948]: }
Feb 01 14:50:12 compute-0 systemd[1]: libpod-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope: Deactivated successfully.
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.635589809 +0000 UTC m=+0.597845968 container died 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f52d222115b73629c4d1725d1d6b20dda7aac2a6c1264c1593d88411da66c861-merged.mount: Deactivated successfully.
Feb 01 14:50:12 compute-0 podman[75912]: 2026-02-01 14:50:12.665948979 +0000 UTC m=+0.628205098 container remove 185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a (image=quay.io/ceph/ceph:v20, name=peaceful_williamson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 14:50:12 compute-0 systemd[1]: libpod-conmon-185ef54968d0b450708913937b32ce63a1bac4e3776afd11101850b5499cf46a.scope: Deactivated successfully.
Feb 01 14:50:12 compute-0 podman[75996]: 2026-02-01 14:50:12.737857933 +0000 UTC m=+0.052471875 container create f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:12 compute-0 systemd[1]: Started libpod-conmon-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope.
Feb 01 14:50:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:12 compute-0 podman[75996]: 2026-02-01 14:50:12.819265547 +0000 UTC m=+0.133879489 container init f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:12 compute-0 podman[75996]: 2026-02-01 14:50:12.716436397 +0000 UTC m=+0.031050399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:12 compute-0 podman[75996]: 2026-02-01 14:50:12.825263427 +0000 UTC m=+0.139877349 container start f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 14:50:12 compute-0 podman[75996]: 2026-02-01 14:50:12.828766946 +0000 UTC m=+0.143380958 container attach f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'crash'
Feb 01 14:50:12 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/266671715' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 01 14:50:12 compute-0 ceph-mon[75179]: mgrmap e5: compute-0.viosrg(active, since 4s)
Feb 01 14:50:12 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2230403667' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 01 14:50:12 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'dashboard'
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'devicehealth'
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'diskprediction_local'
Feb 01 14:50:13 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 01 14:50:13 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 01 14:50:13 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]:   from numpy import show_config as show_numpy_config
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'influx'
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'insights'
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'iostat'
Feb 01 14:50:13 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'k8sevents'
Feb 01 14:50:14 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'localpool'
Feb 01 14:50:14 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'mds_autoscaler'
Feb 01 14:50:14 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'mirroring'
Feb 01 14:50:14 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'nfs'
Feb 01 14:50:14 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'orchestrator'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'osd_perf_query'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'osd_support'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'pg_autoscaler'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'progress'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'prometheus'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rbd_support'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rgw'
Feb 01 14:50:15 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'rook'
Feb 01 14:50:16 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'selftest'
Feb 01 14:50:16 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'smb'
Feb 01 14:50:16 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'snap_schedule'
Feb 01 14:50:16 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'stats'
Feb 01 14:50:16 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'status'
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'telegraf'
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'telemetry'
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'test_orchestrator'
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr[py] Loading python module 'volumes'
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Active manager daemon compute-0.viosrg restarted
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.viosrg
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: ms_deliver_dispatch: unhandled message 0x55f8f16fc000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr handle_mgr_map Activating!
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.viosrg(active, starting, since 0.0228883s)
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr handle_mgr_map I am now activating
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} v 0)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e1 all = 1
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: balancer
Feb 01 14:50:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Manager daemon compute-0.viosrg is now available
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Starting
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:50:17
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:50:17 compute-0 ceph-mgr[75469]: [balancer INFO root] No pools available
Feb 01 14:50:17 compute-0 ceph-mon[75179]: Active manager daemon compute-0.viosrg restarted
Feb 01 14:50:17 compute-0 ceph-mon[75179]: Activating manager daemon compute-0.viosrg
Feb 01 14:50:17 compute-0 ceph-mon[75179]: osdmap e2: 0 total, 0 up, 0 in
Feb 01 14:50:17 compute-0 ceph-mon[75179]: mgrmap e6: compute-0.viosrg(active, starting, since 0.0228883s)
Feb 01 14:50:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr metadata", "who": "compute-0.viosrg", "id": "compute-0.viosrg"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata"} : dispatch
Feb 01 14:50:17 compute-0 ceph-mon[75179]: Manager daemon compute-0.viosrg is now available
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: cephadm
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: crash
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: devicehealth
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Starting
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: iostat
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: nfs
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: orchestrator
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: pg_autoscaler
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: progress
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [progress INFO root] Loading...
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [progress INFO root] No stored events to load
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [progress INFO root] Loaded [] historic events
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [progress INFO root] Loaded OSDMap, ready.
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] recovery thread starting
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] starting setup
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: rbd_support
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: status
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: telemetry
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] PerfHandler: starting
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TaskHandler: starting
Feb 01 14:50:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} v 0)
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] setup complete
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: mgr load Constructed class from module: volumes
Feb 01 14:50:18 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.viosrg(active, since 1.0323s)
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 01 14:50:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 01 14:50:18 compute-0 dreamy_haslett[76013]: {
Feb 01 14:50:18 compute-0 dreamy_haslett[76013]:     "mgrmap_epoch": 7,
Feb 01 14:50:18 compute-0 dreamy_haslett[76013]:     "initialized": true
Feb 01 14:50:18 compute-0 dreamy_haslett[76013]: }
Feb 01 14:50:18 compute-0 systemd[1]: libpod-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope: Deactivated successfully.
Feb 01 14:50:18 compute-0 podman[76148]: 2026-02-01 14:50:18.787286382 +0000 UTC m=+0.038803549 container died f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-83461617f47c3d4fa89a34d8d107ab9f6a304424a37c106255ce0771e6c3faf8-merged.mount: Deactivated successfully.
Feb 01 14:50:18 compute-0 podman[76148]: 2026-02-01 14:50:18.825467103 +0000 UTC m=+0.076984270 container remove f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441 (image=quay.io/ceph/ceph:v20, name=dreamy_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:50:18 compute-0 systemd[1]: libpod-conmon-f3e3e10e0042e599788f0e9ea8692a252d631d97d4eafa36e7ba3faa68849441.scope: Deactivated successfully.
Feb 01 14:50:18 compute-0 podman[76162]: 2026-02-01 14:50:18.906226828 +0000 UTC m=+0.055750188 container create 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:50:18 compute-0 systemd[1]: Started libpod-conmon-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope.
Feb 01 14:50:18 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:18 compute-0 podman[76162]: 2026-02-01 14:50:18.881343304 +0000 UTC m=+0.030866714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:18 compute-0 podman[76162]: 2026-02-01 14:50:18.996350418 +0000 UTC m=+0.145873748 container init 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:50:19 compute-0 podman[76162]: 2026-02-01 14:50:19.002539664 +0000 UTC m=+0.152062994 container start 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:19 compute-0 podman[76162]: 2026-02-01 14:50:19.005889898 +0000 UTC m=+0.155413258 container attach 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:19 compute-0 ceph-mon[75179]: Found migration_current of "None". Setting to last migration.
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/mirror_snapshot_schedule"} : dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.viosrg/trash_purge_schedule"} : dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: mgrmap e7: compute-0.viosrg(active, since 1.0323s)
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 01 14:50:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb 01 14:50:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: [cephadm INFO cherrypy.error] [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb 01 14:50:19 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb 01 14:50:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:50:19 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019899420 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:20 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 01 14:50:20 compute-0 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Bus STARTING
Feb 01 14:50:20 compute-0 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Serving on https://192.168.122.100:7150
Feb 01 14:50:20 compute-0 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Client ('192.168.122.100', 46906) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 01 14:50:20 compute-0 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Serving on http://192.168.122.100:8765
Feb 01 14:50:20 compute-0 ceph-mon[75179]: [01/Feb/2026:14:50:19] ENGINE Bus STARTED
Feb 01 14:50:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 01 14:50:20 compute-0 blissful_leakey[76178]: module 'orchestrator' is already enabled (always-on)
Feb 01 14:50:20 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.viosrg(active, since 2s)
Feb 01 14:50:20 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:20 compute-0 systemd[1]: libpod-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope: Deactivated successfully.
Feb 01 14:50:20 compute-0 podman[76162]: 2026-02-01 14:50:20.442947622 +0000 UTC m=+1.592471002 container died 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-682979f9801aa01d60ab876d461b840724401844365c0ea86ab7b070292fe4aa-merged.mount: Deactivated successfully.
Feb 01 14:50:20 compute-0 podman[76162]: 2026-02-01 14:50:20.483820738 +0000 UTC m=+1.633344098 container remove 1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f (image=quay.io/ceph/ceph:v20, name=blissful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 14:50:20 compute-0 systemd[1]: libpod-conmon-1d4d3a5e1ef5078d7296346ff14a3346ff13b4b93f4a5d25d4f4041d8927b16f.scope: Deactivated successfully.
Feb 01 14:50:20 compute-0 podman[76239]: 2026-02-01 14:50:20.551410891 +0000 UTC m=+0.047195837 container create 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:50:20 compute-0 systemd[1]: Started libpod-conmon-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope.
Feb 01 14:50:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:20 compute-0 podman[76239]: 2026-02-01 14:50:20.617188962 +0000 UTC m=+0.112973938 container init 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:50:20 compute-0 podman[76239]: 2026-02-01 14:50:20.622159353 +0000 UTC m=+0.117944319 container start 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:50:20 compute-0 podman[76239]: 2026-02-01 14:50:20.62593984 +0000 UTC m=+0.121724756 container attach 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:50:20 compute-0 podman[76239]: 2026-02-01 14:50:20.534843552 +0000 UTC m=+0.030628478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:21 compute-0 systemd[1]: libpod-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope: Deactivated successfully.
Feb 01 14:50:21 compute-0 podman[76239]: 2026-02-01 14:50:21.033816452 +0000 UTC m=+0.529601398 container died 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca482dba53fa5134ab5afcb87716f065b3d8e4488e7cdcffbd3c8ec96c468785-merged.mount: Deactivated successfully.
Feb 01 14:50:21 compute-0 podman[76239]: 2026-02-01 14:50:21.06840197 +0000 UTC m=+0.564186906 container remove 2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5 (image=quay.io/ceph/ceph:v20, name=inspiring_haslett, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:50:21 compute-0 systemd[1]: libpod-conmon-2931722d0015a9347ffea3ac6d36b8ead5f52149334742b4e246db2df8c32aa5.scope: Deactivated successfully.
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.131800224 +0000 UTC m=+0.047935157 container create 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:50:21 compute-0 systemd[1]: Started libpod-conmon-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope.
Feb 01 14:50:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.111959023 +0000 UTC m=+0.028093966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.213124726 +0000 UTC m=+0.129259669 container init 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.219852416 +0000 UTC m=+0.135987369 container start 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.224212369 +0000 UTC m=+0.140347302 container attach 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2570491495' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 01 14:50:21 compute-0 ceph-mon[75179]: mgrmap e8: compute-0.viosrg(active, since 2s)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_user
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb 01 14:50:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb 01 14:50:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_config
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb 01 14:50:21 compute-0 dazzling_ritchie[76310]: ssh user set to ceph-admin. sudo will be used
Feb 01 14:50:21 compute-0 systemd[1]: libpod-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope: Deactivated successfully.
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.633869111 +0000 UTC m=+0.550004054 container died 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 01 14:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-426bcd97ce9acb78f0906da95f36152e19556d4a68c0707cd5956c79f2d4d37a-merged.mount: Deactivated successfully.
Feb 01 14:50:21 compute-0 podman[76293]: 2026-02-01 14:50:21.6698519 +0000 UTC m=+0.585986823 container remove 8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9 (image=quay.io/ceph/ceph:v20, name=dazzling_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:21 compute-0 systemd[1]: libpod-conmon-8d9795167d45fe3b50d3b9694285607dd7cc9fb87b61d23c262233f164e3dea9.scope: Deactivated successfully.
Feb 01 14:50:21 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:21 compute-0 podman[76348]: 2026-02-01 14:50:21.725522255 +0000 UTC m=+0.043389719 container create a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:50:21 compute-0 systemd[1]: Started libpod-conmon-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope.
Feb 01 14:50:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:21 compute-0 podman[76348]: 2026-02-01 14:50:21.797599434 +0000 UTC m=+0.115466908 container init a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:21 compute-0 podman[76348]: 2026-02-01 14:50:21.701441564 +0000 UTC m=+0.019309078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:21 compute-0 podman[76348]: 2026-02-01 14:50:21.812464985 +0000 UTC m=+0.130332439 container start a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 14:50:21 compute-0 podman[76348]: 2026-02-01 14:50:21.816585722 +0000 UTC m=+0.134453236 container attach a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb 01 14:50:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_identity_key
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: [cephadm INFO root] Set ssh private key
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh private key
Feb 01 14:50:22 compute-0 systemd[1]: libpod-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope: Deactivated successfully.
Feb 01 14:50:22 compute-0 podman[76348]: 2026-02-01 14:50:22.281844407 +0000 UTC m=+0.599711841 container died a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-83f5cd8dddc92a006f20f3d2ac0215792621af1e83f868af784d599d275859d5-merged.mount: Deactivated successfully.
Feb 01 14:50:22 compute-0 podman[76348]: 2026-02-01 14:50:22.319281957 +0000 UTC m=+0.637149411 container remove a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b (image=quay.io/ceph/ceph:v20, name=frosty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:50:22 compute-0 systemd[1]: libpod-conmon-a186e4ac1739a1a1b695c4e038c21eeee152f68a7e98aa931b814ee6484f175b.scope: Deactivated successfully.
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.38477036 +0000 UTC m=+0.047014222 container create 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:22 compute-0 systemd[1]: Started libpod-conmon-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope.
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.368627473 +0000 UTC m=+0.030871365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.463662612 +0000 UTC m=+0.125906594 container init 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.477795062 +0000 UTC m=+0.140038954 container start 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.481887978 +0000 UTC m=+0.144131940 container attach 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:50:22 compute-0 ceph-mon[75179]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:22 compute-0 ceph-mon[75179]: Set ssh ssh_user
Feb 01 14:50:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:22 compute-0 ceph-mon[75179]: Set ssh ssh_config
Feb 01 14:50:22 compute-0 ceph-mon[75179]: ssh user set to ceph-admin. sudo will be used
Feb 01 14:50:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb 01 14:50:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb 01 14:50:22 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb 01 14:50:22 compute-0 systemd[1]: libpod-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope: Deactivated successfully.
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.913043388 +0000 UTC m=+0.575287280 container died 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-650ba55452133fba3ad63b2a95e67a601583e502215390085930fe7bf4fa127e-merged.mount: Deactivated successfully.
Feb 01 14:50:22 compute-0 podman[76403]: 2026-02-01 14:50:22.957081314 +0000 UTC m=+0.619325206 container remove 031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614 (image=quay.io/ceph/ceph:v20, name=heuristic_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:50:22 compute-0 systemd[1]: libpod-conmon-031fe38f43e2f49239c52f97dac9a439dcffba2158825727a6fe0d3028f73614.scope: Deactivated successfully.
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.033710833 +0000 UTC m=+0.055704077 container create 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:23 compute-0 systemd[1]: Started libpod-conmon-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope.
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.009768695 +0000 UTC m=+0.031761999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.133746644 +0000 UTC m=+0.155739968 container init 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.140849975 +0000 UTC m=+0.162843219 container start 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.145214318 +0000 UTC m=+0.167207632 container attach 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 14:50:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:23 compute-0 blissful_ishizaka[76474]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuc62woYQ6HfDdFdKxH9p2YvJ2Cu5z79VhJzSOBo96c05tD8Q91qYPpnXfDIEo83mJltB9P6bcxmVNw1QVUUGbTbW0drCaQkf+KnajOtuJ1H+96zTyvUYiCNXUxdYQ4vrlju8lrI5XjvOA066ddPwBuJ8t12jQk26l6X0LfCUirqvXIiXcpVvBNUkxDLulQwGUy2yIkNBevRvbJskFNHqcEy4sOkLBDYXSaPVtrmzuNRDBdqm6U6xfWmHQXiF4gVuOKNRms/+KUhCUY/dDWHj1jIJVmrTMVZhEQZgyhAXbb4JDMK9/NMCalRhh3f6UlBxmcQgSsNmGk+UgD+w0jbODdYMec0vOXZOYRnClALtuxqNe/enT9GyKc314/xWjLRumtOqPjjz+NtYPr7tAZVAlPENDlLhvzKVycefF4CPAvaPcqTcMWtfXYgGqcOQj4vwWaRndS9s95sQPLaIeJ8i+ZMggfF+tMpw9Zm0boto6XPwjw4ZWXu9etZ2GDbMSfAE= zuul@controller
Feb 01 14:50:23 compute-0 systemd[1]: libpod-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope: Deactivated successfully.
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.600730837 +0000 UTC m=+0.622724091 container died 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 14:50:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c0803cb24b16fcee0557fbb20e0d60b9f9c6f825c2fffaf9c74dfcbcd1e27bf-merged.mount: Deactivated successfully.
Feb 01 14:50:23 compute-0 podman[76458]: 2026-02-01 14:50:23.646559744 +0000 UTC m=+0.668552968 container remove 3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7 (image=quay.io/ceph/ceph:v20, name=blissful_ishizaka, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:50:23 compute-0 systemd[1]: libpod-conmon-3abd7d18e4351f6246ff1704483d6d31f973bd965edd7384dbcd8122fda1b8c7.scope: Deactivated successfully.
Feb 01 14:50:23 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:23 compute-0 podman[76512]: 2026-02-01 14:50:23.725138397 +0000 UTC m=+0.055652096 container create 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:50:23 compute-0 systemd[1]: Started libpod-conmon-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope.
Feb 01 14:50:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:23 compute-0 podman[76512]: 2026-02-01 14:50:23.700547291 +0000 UTC m=+0.031061040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:23 compute-0 podman[76512]: 2026-02-01 14:50:23.804065671 +0000 UTC m=+0.134579350 container init 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:23 compute-0 podman[76512]: 2026-02-01 14:50:23.811082149 +0000 UTC m=+0.141595808 container start 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:23 compute-0 podman[76512]: 2026-02-01 14:50:23.81464527 +0000 UTC m=+0.145158959 container attach 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 14:50:23 compute-0 ceph-mon[75179]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:23 compute-0 ceph-mon[75179]: Set ssh ssh_identity_key
Feb 01 14:50:23 compute-0 ceph-mon[75179]: Set ssh private key
Feb 01 14:50:23 compute-0 ceph-mon[75179]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:23 compute-0 ceph-mon[75179]: Set ssh ssh_identity_pub
Feb 01 14:50:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:24 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:24 compute-0 sshd-session[76554]: Accepted publickey for ceph-admin from 192.168.122.100 port 48194 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:24 compute-0 systemd-logind[786]: New session 20 of user ceph-admin.
Feb 01 14:50:24 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Feb 01 14:50:24 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 01 14:50:24 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 01 14:50:24 compute-0 systemd[1]: Starting User Manager for UID 42477...
Feb 01 14:50:24 compute-0 systemd[76558]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:24 compute-0 systemd[76558]: Queued start job for default target Main User Target.
Feb 01 14:50:24 compute-0 systemd[76558]: Created slice User Application Slice.
Feb 01 14:50:24 compute-0 systemd[76558]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 01 14:50:24 compute-0 systemd[76558]: Started Daily Cleanup of User's Temporary Directories.
Feb 01 14:50:24 compute-0 systemd[76558]: Reached target Paths.
Feb 01 14:50:24 compute-0 systemd[76558]: Reached target Timers.
Feb 01 14:50:24 compute-0 systemd[76558]: Starting D-Bus User Message Bus Socket...
Feb 01 14:50:24 compute-0 systemd[76558]: Starting Create User's Volatile Files and Directories...
Feb 01 14:50:24 compute-0 sshd-session[76571]: Accepted publickey for ceph-admin from 192.168.122.100 port 48200 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:24 compute-0 systemd[76558]: Listening on D-Bus User Message Bus Socket.
Feb 01 14:50:24 compute-0 systemd[76558]: Reached target Sockets.
Feb 01 14:50:24 compute-0 systemd-logind[786]: New session 22 of user ceph-admin.
Feb 01 14:50:24 compute-0 systemd[76558]: Finished Create User's Volatile Files and Directories.
Feb 01 14:50:24 compute-0 systemd[76558]: Reached target Basic System.
Feb 01 14:50:24 compute-0 systemd[76558]: Reached target Main User Target.
Feb 01 14:50:24 compute-0 systemd[76558]: Startup finished in 143ms.
Feb 01 14:50:24 compute-0 systemd[1]: Started User Manager for UID 42477.
Feb 01 14:50:24 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Feb 01 14:50:24 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Feb 01 14:50:24 compute-0 sshd-session[76554]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:24 compute-0 sshd-session[76571]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:24 compute-0 sudo[76578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:24 compute-0 sudo[76578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:24 compute-0 sudo[76578]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:24 compute-0 ceph-mon[75179]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:25 compute-0 sshd-session[76603]: Accepted publickey for ceph-admin from 192.168.122.100 port 48210 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:25 compute-0 systemd-logind[786]: New session 23 of user ceph-admin.
Feb 01 14:50:25 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Feb 01 14:50:25 compute-0 sshd-session[76603]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:25 compute-0 sudo[76607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 01 14:50:25 compute-0 sudo[76607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:25 compute-0 sudo[76607]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:25 compute-0 sshd-session[76632]: Accepted publickey for ceph-admin from 192.168.122.100 port 48224 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:25 compute-0 systemd-logind[786]: New session 24 of user ceph-admin.
Feb 01 14:50:25 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Feb 01 14:50:25 compute-0 sshd-session[76632]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052558 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:25 compute-0 sudo[76636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Feb 01 14:50:25 compute-0 sudo[76636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:25 compute-0 sudo[76636]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:25 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb 01 14:50:25 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb 01 14:50:25 compute-0 sshd-session[76661]: Accepted publickey for ceph-admin from 192.168.122.100 port 48236 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:25 compute-0 systemd-logind[786]: New session 25 of user ceph-admin.
Feb 01 14:50:25 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Feb 01 14:50:25 compute-0 sshd-session[76661]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:25 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:25 compute-0 sudo[76665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:25 compute-0 sudo[76665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:25 compute-0 sudo[76665]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:25 compute-0 ceph-mon[75179]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:25 compute-0 sshd-session[76690]: Accepted publickey for ceph-admin from 192.168.122.100 port 48238 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:25 compute-0 systemd-logind[786]: New session 26 of user ceph-admin.
Feb 01 14:50:26 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Feb 01 14:50:26 compute-0 sshd-session[76690]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:26 compute-0 sudo[76694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:26 compute-0 sudo[76694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:26 compute-0 sudo[76694]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:26 compute-0 sshd-session[76719]: Accepted publickey for ceph-admin from 192.168.122.100 port 48246 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:26 compute-0 systemd-logind[786]: New session 27 of user ceph-admin.
Feb 01 14:50:26 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Feb 01 14:50:26 compute-0 sshd-session[76719]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:26 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:26 compute-0 sudo[76723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Feb 01 14:50:26 compute-0 sudo[76723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:26 compute-0 sudo[76723]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:26 compute-0 sshd-session[76748]: Accepted publickey for ceph-admin from 192.168.122.100 port 48258 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:26 compute-0 systemd-logind[786]: New session 28 of user ceph-admin.
Feb 01 14:50:26 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Feb 01 14:50:26 compute-0 sshd-session[76748]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:26 compute-0 sudo[76752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:26 compute-0 sudo[76752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:26 compute-0 sudo[76752]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:26 compute-0 ceph-mon[75179]: Deploying cephadm binary to compute-0
Feb 01 14:50:27 compute-0 sshd-session[76777]: Accepted publickey for ceph-admin from 192.168.122.100 port 48266 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:27 compute-0 systemd-logind[786]: New session 29 of user ceph-admin.
Feb 01 14:50:27 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Feb 01 14:50:27 compute-0 sshd-session[76777]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:27 compute-0 sudo[76781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Feb 01 14:50:27 compute-0 sudo[76781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:27 compute-0 sudo[76781]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:27 compute-0 sshd-session[76806]: Accepted publickey for ceph-admin from 192.168.122.100 port 48282 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:27 compute-0 systemd-logind[786]: New session 30 of user ceph-admin.
Feb 01 14:50:27 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Feb 01 14:50:27 compute-0 sshd-session[76806]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:27 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:28 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:28 compute-0 sshd-session[76833]: Accepted publickey for ceph-admin from 192.168.122.100 port 48298 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:28 compute-0 systemd-logind[786]: New session 31 of user ceph-admin.
Feb 01 14:50:28 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Feb 01 14:50:28 compute-0 sshd-session[76833]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:28 compute-0 sudo[76837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Feb 01 14:50:28 compute-0 sudo[76837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:28 compute-0 sudo[76837]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:29 compute-0 sshd-session[76862]: Accepted publickey for ceph-admin from 192.168.122.100 port 48312 ssh2: RSA SHA256:bmFcrL+FkRxi0Y8nv16OtHztKzEgseijvyIvMlraUdY
Feb 01 14:50:29 compute-0 systemd-logind[786]: New session 32 of user ceph-admin.
Feb 01 14:50:29 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Feb 01 14:50:29 compute-0 sshd-session[76862]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Feb 01 14:50:29 compute-0 sudo[76866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 01 14:50:29 compute-0 sudo[76866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:29 compute-0 sudo[76866]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:29 compute-0 ceph-mgr[75469]: [cephadm INFO root] Added host compute-0
Feb 01 14:50:29 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 01 14:50:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:50:29 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:29 compute-0 loving_pare[76528]: Added host 'compute-0' with addr '192.168.122.100'
Feb 01 14:50:29 compute-0 systemd[1]: libpod-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope: Deactivated successfully.
Feb 01 14:50:29 compute-0 podman[76512]: 2026-02-01 14:50:29.560028057 +0000 UTC m=+5.890541736 container died 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-031aba11ae3a9303845c3e2b99857adf84aa17e773386a5ff1935bb56fcf95ac-merged.mount: Deactivated successfully.
Feb 01 14:50:29 compute-0 sudo[76912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:29 compute-0 sudo[76912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:29 compute-0 sudo[76912]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:29 compute-0 podman[76512]: 2026-02-01 14:50:29.617805952 +0000 UTC m=+5.948319641 container remove 57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d (image=quay.io/ceph/ceph:v20, name=loving_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 14:50:29 compute-0 systemd[1]: libpod-conmon-57da267309921d7d730aa12089dde086daaaf6a2472d44628697ad984d2dd80d.scope: Deactivated successfully.
Feb 01 14:50:29 compute-0 sudo[76948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Feb 01 14:50:29 compute-0 sudo[76948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:29 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:29 compute-0 podman[76953]: 2026-02-01 14:50:29.701705876 +0000 UTC m=+0.056129409 container create 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:50:29 compute-0 systemd[1]: Started libpod-conmon-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope.
Feb 01 14:50:29 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:29 compute-0 podman[76953]: 2026-02-01 14:50:29.681039701 +0000 UTC m=+0.035463234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:29 compute-0 podman[76953]: 2026-02-01 14:50:29.790067316 +0000 UTC m=+0.144490889 container init 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:50:29 compute-0 podman[76953]: 2026-02-01 14:50:29.796999573 +0000 UTC m=+0.151423136 container start 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:29 compute-0 podman[76953]: 2026-02-01 14:50:29.801406437 +0000 UTC m=+0.155830040 container attach 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb 01 14:50:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 01 14:50:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:30 compute-0 flamboyant_feistel[76989]: Scheduled mon update...
Feb 01 14:50:30 compute-0 systemd[1]: libpod-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[76953]: 2026-02-01 14:50:30.222890594 +0000 UTC m=+0.577314147 container died 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e178b1d42057af89d9199777cc210258582f2282ee4040de16c952b5d1c29a-merged.mount: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[76953]: 2026-02-01 14:50:30.264347957 +0000 UTC m=+0.618771470 container remove 495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7 (image=quay.io/ceph/ceph:v20, name=flamboyant_feistel, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:30 compute-0 systemd[1]: libpod-conmon-495357593ed9495af3fcefccdecabf81b894d6cd9398579426604d670425e7a7.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.320730403 +0000 UTC m=+0.044533221 container create 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:50:30 compute-0 systemd[1]: Started libpod-conmon-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope.
Feb 01 14:50:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:30 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.295420836 +0000 UTC m=+0.019223724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:30 compute-0 podman[77023]: 2026-02-01 14:50:30.399187153 +0000 UTC m=+0.446550057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.411211163 +0000 UTC m=+0.135014021 container init 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.416284777 +0000 UTC m=+0.140087595 container start 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.419676163 +0000 UTC m=+0.143479021 container attach 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.510178284 +0000 UTC m=+0.042535335 container create 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:30 compute-0 ceph-mon[75179]: Added host compute-0
Feb 01 14:50:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:50:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:30 compute-0 systemd[1]: Started libpod-conmon-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope.
Feb 01 14:50:30 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.573418603 +0000 UTC m=+0.105775674 container init 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.580842893 +0000 UTC m=+0.113199934 container start 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.585149335 +0000 UTC m=+0.117506426 container attach 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.489974662 +0000 UTC m=+0.022331723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:30 compute-0 ecstatic_payne[77105]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 01 14:50:30 compute-0 systemd[1]: libpod-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.69415299 +0000 UTC m=+0.226510051 container died 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 01 14:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c30dc1dfd370a4f924baadeade66151a1b606eaa1a4441ce8b832f5eb7d46146-merged.mount: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[77087]: 2026-02-01 14:50:30.733335718 +0000 UTC m=+0.265692769 container remove 5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7 (image=quay.io/ceph/ceph:v20, name=ecstatic_payne, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:30 compute-0 systemd[1]: libpod-conmon-5df9638e113c35ce6bdec8fccbecd03d50ac4d3a5f99bfafdccc2a0b81ca72a7.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 sudo[76948]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb 01 14:50:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:30 compute-0 sudo[77139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb 01 14:50:30 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb 01 14:50:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:30 compute-0 sudo[77139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:30 compute-0 sudo[77139]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:30 compute-0 admiring_northcutt[77067]: Scheduled mgr update...
Feb 01 14:50:30 compute-0 systemd[1]: libpod-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.877572149 +0000 UTC m=+0.601374957 container died 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7baf343193c4b14e30fa283061e316a0f648c48cfeaaacb3d112c974835176a-merged.mount: Deactivated successfully.
Feb 01 14:50:30 compute-0 podman[77051]: 2026-02-01 14:50:30.919035512 +0000 UTC m=+0.642838320 container remove 60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc (image=quay.io/ceph/ceph:v20, name=admiring_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:30 compute-0 systemd[1]: libpod-conmon-60786c9b2a3c7423584e88e1d235ff777b7c4bb7496de4f6911401a0afebfedc.scope: Deactivated successfully.
Feb 01 14:50:30 compute-0 sudo[77166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 01 14:50:30 compute-0 sudo[77166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:30 compute-0 podman[77204]: 2026-02-01 14:50:30.970160659 +0000 UTC m=+0.037052630 container create 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:31 compute-0 systemd[1]: Started libpod-conmon-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope.
Feb 01 14:50:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:30.951626974 +0000 UTC m=+0.018518975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:31.050912324 +0000 UTC m=+0.117804335 container init 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:31.056945795 +0000 UTC m=+0.123837756 container start 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:31.060613008 +0000 UTC m=+0.127504999 container attach 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 14:50:31 compute-0 sudo[77166]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 sudo[77266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:31 compute-0 sudo[77266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:31 compute-0 sudo[77266]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:31 compute-0 sudo[77291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:50:31 compute-0 sudo[77291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:31 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service crash spec with placement *
Feb 01 14:50:31 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb 01 14:50:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 01 14:50:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 angry_haslett[77221]: Scheduled crash update...
Feb 01 14:50:31 compute-0 systemd[1]: libpod-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope: Deactivated successfully.
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:31.52183806 +0000 UTC m=+0.588730021 container died 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-77cfb63ff9acb17ff38dd12adc5dbd9a275fab4bcd581323f74e2f788b0fa2d2-merged.mount: Deactivated successfully.
Feb 01 14:50:31 compute-0 podman[77204]: 2026-02-01 14:50:31.559774753 +0000 UTC m=+0.626666714 container remove 752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e (image=quay.io/ceph/ceph:v20, name=angry_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:50:31 compute-0 systemd[1]: libpod-conmon-752699d58fe28b5c3aae20cb69e5e1efcdfe420f1aa64139050cfcd7f4f7711e.scope: Deactivated successfully.
Feb 01 14:50:31 compute-0 podman[77334]: 2026-02-01 14:50:31.620727678 +0000 UTC m=+0.046157017 container create 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:31 compute-0 systemd[1]: Started libpod-conmon-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope.
Feb 01 14:50:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:31 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:31 compute-0 podman[77334]: 2026-02-01 14:50:31.597006547 +0000 UTC m=+0.022435846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:31 compute-0 podman[77334]: 2026-02-01 14:50:31.703247273 +0000 UTC m=+0.128676612 container init 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:50:31 compute-0 podman[77334]: 2026-02-01 14:50:31.707791782 +0000 UTC m=+0.133221091 container start 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:50:31 compute-0 podman[77334]: 2026-02-01 14:50:31.711410444 +0000 UTC m=+0.136839763 container attach 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:50:31 compute-0 podman[77389]: 2026-02-01 14:50:31.777276338 +0000 UTC m=+0.052319312 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:31 compute-0 ceph-mon[75179]: Saving service mon spec with placement count:5
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:31 compute-0 ceph-mon[75179]: Saving service mgr spec with placement count:2
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:31 compute-0 podman[77428]: 2026-02-01 14:50:31.94450596 +0000 UTC m=+0.049886433 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:31 compute-0 podman[77389]: 2026-02-01 14:50:31.951845968 +0000 UTC m=+0.226888892 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 14:50:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb 01 14:50:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/700280739' entity='client.admin' 
Feb 01 14:50:32 compute-0 systemd[1]: libpod-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope: Deactivated successfully.
Feb 01 14:50:32 compute-0 podman[77334]: 2026-02-01 14:50:32.138014496 +0000 UTC m=+0.563443815 container died 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:50:32 compute-0 sudo[77291]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f38774e566f468a52bb2b8ad7b325a0e42f19aab46afdac1daa317eb42fc2487-merged.mount: Deactivated successfully.
Feb 01 14:50:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:32 compute-0 podman[77334]: 2026-02-01 14:50:32.176911726 +0000 UTC m=+0.602341045 container remove 820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae (image=quay.io/ceph/ceph:v20, name=priceless_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 14:50:32 compute-0 systemd[1]: libpod-conmon-820f2d9892f858435fa2321ada9f0f22b898e14168134b9991ba4a6002ea28ae.scope: Deactivated successfully.
Feb 01 14:50:32 compute-0 sudo[77502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:32 compute-0 sudo[77502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:32 compute-0 sudo[77502]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.249969164 +0000 UTC m=+0.050577423 container create c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:32 compute-0 sudo[77537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:50:32 compute-0 sudo[77537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:32 compute-0 systemd[1]: Started libpod-conmon-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope.
Feb 01 14:50:32 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.309260651 +0000 UTC m=+0.109868930 container init c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.314398797 +0000 UTC m=+0.115007076 container start c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.317617908 +0000 UTC m=+0.118226187 container attach c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.233618111 +0000 UTC m=+0.034226390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:32 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:32 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77604 (sysctl)
Feb 01 14:50:32 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 01 14:50:32 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 01 14:50:32 compute-0 sudo[77537]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb 01 14:50:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:32 compute-0 systemd[1]: libpod-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope: Deactivated successfully.
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.72974056 +0000 UTC m=+0.530348859 container died c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-be332acf79f7712925ef8e6343537187dfbb27cde2f745a0f6c45eb7b8223ad6-merged.mount: Deactivated successfully.
Feb 01 14:50:32 compute-0 podman[77505]: 2026-02-01 14:50:32.764712699 +0000 UTC m=+0.565320968 container remove c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644 (image=quay.io/ceph/ceph:v20, name=stoic_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:50:32 compute-0 systemd[1]: libpod-conmon-c70abfd62f3a5286172909a60458996845b65061ac27b89d151f18bfd2b4a644.scope: Deactivated successfully.
Feb 01 14:50:32 compute-0 ceph-mon[75179]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:32 compute-0 ceph-mon[75179]: Saving service crash spec with placement *
Feb 01 14:50:32 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/700280739' entity='client.admin' 
Feb 01 14:50:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:32 compute-0 sudo[77638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:32 compute-0 sudo[77638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:32 compute-0 sudo[77638]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:32 compute-0 podman[77643]: 2026-02-01 14:50:32.824628815 +0000 UTC m=+0.044021957 container create a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:32 compute-0 systemd[1]: Started libpod-conmon-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope.
Feb 01 14:50:32 compute-0 sudo[77676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 01 14:50:32 compute-0 sudo[77676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:32 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:32 compute-0 podman[77643]: 2026-02-01 14:50:32.890390436 +0000 UTC m=+0.109783628 container init a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:50:32 compute-0 podman[77643]: 2026-02-01 14:50:32.894837931 +0000 UTC m=+0.114231083 container start a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:50:32 compute-0 podman[77643]: 2026-02-01 14:50:32.898630959 +0000 UTC m=+0.118024151 container attach a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:32 compute-0 podman[77643]: 2026-02-01 14:50:32.810768992 +0000 UTC m=+0.030162184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:33 compute-0 sudo[77676]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:33 compute-0 sudo[77746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:33 compute-0 sudo[77746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:33 compute-0 sudo[77746]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:33 compute-0 sudo[77771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- inventory --format=json-pretty --filter-for-batch
Feb 01 14:50:33 compute-0 sudo[77771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:33 compute-0 ceph-mgr[75469]: [cephadm INFO root] Added label _admin to host compute-0
Feb 01 14:50:33 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb 01 14:50:33 compute-0 brave_golick[77704]: Added label _admin to host compute-0
Feb 01 14:50:33 compute-0 systemd[1]: libpod-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope: Deactivated successfully.
Feb 01 14:50:33 compute-0 podman[77643]: 2026-02-01 14:50:33.306173701 +0000 UTC m=+0.525566883 container died a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-95cef01d1e5a484f58d803b47059f8b817e7f59e0d6fa5b9d088094fde6f292a-merged.mount: Deactivated successfully.
Feb 01 14:50:33 compute-0 podman[77643]: 2026-02-01 14:50:33.350207287 +0000 UTC m=+0.569600459 container remove a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01 (image=quay.io/ceph/ceph:v20, name=brave_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 14:50:33 compute-0 systemd[1]: libpod-conmon-a2498803cb37a1f7532254c76bd82ea3ec9b901bd4e113301810ba7426073d01.scope: Deactivated successfully.
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.397968098 +0000 UTC m=+0.031617085 container create 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:33 compute-0 podman[77820]: 2026-02-01 14:50:33.425086196 +0000 UTC m=+0.055745899 container create f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:33 compute-0 systemd[1]: Started libpod-conmon-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope.
Feb 01 14:50:33 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:33 compute-0 systemd[1]: Started libpod-conmon-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope.
Feb 01 14:50:33 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.467924628 +0000 UTC m=+0.101573665 container init 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 01 14:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.47790619 +0000 UTC m=+0.111555217 container start 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.382951404 +0000 UTC m=+0.016600421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.481942835 +0000 UTC m=+0.115591822 container attach 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:33 compute-0 zen_shannon[77851]: 167 167
Feb 01 14:50:33 compute-0 systemd[1]: libpod-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope: Deactivated successfully.
Feb 01 14:50:33 compute-0 podman[77820]: 2026-02-01 14:50:33.485747082 +0000 UTC m=+0.116406805 container init f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.486809142 +0000 UTC m=+0.120458139 container died 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:50:33 compute-0 podman[77820]: 2026-02-01 14:50:33.492325358 +0000 UTC m=+0.122985061 container start f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:33 compute-0 podman[77820]: 2026-02-01 14:50:33.49907964 +0000 UTC m=+0.129739363 container attach f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 01 14:50:33 compute-0 podman[77820]: 2026-02-01 14:50:33.405946214 +0000 UTC m=+0.036606017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-77c3acc5c5e3b72f5d61b390302d65d8e9e100c973cd147ad426d77f19c37d8f-merged.mount: Deactivated successfully.
Feb 01 14:50:33 compute-0 podman[77822]: 2026-02-01 14:50:33.521872145 +0000 UTC m=+0.155521142 container remove 41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:50:33 compute-0 systemd[1]: libpod-conmon-41e5273fc1269f3914683f731a5fddbc6894ac4133f7237f56d341800bf7a119.scope: Deactivated successfully.
Feb 01 14:50:33 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:33 compute-0 ceph-mon[75179]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb 01 14:50:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/117713498' entity='client.admin' 
Feb 01 14:50:34 compute-0 infallible_proskuriakova[77857]: set mgr/dashboard/cluster/status
Feb 01 14:50:34 compute-0 systemd[1]: libpod-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope: Deactivated successfully.
Feb 01 14:50:34 compute-0 podman[77820]: 2026-02-01 14:50:34.074808671 +0000 UTC m=+0.705468414 container died f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba8a695c1a991e54d71b3a94c2024e023a03783f92fd927052f748756b7abd3-merged.mount: Deactivated successfully.
Feb 01 14:50:34 compute-0 podman[77820]: 2026-02-01 14:50:34.1154061 +0000 UTC m=+0.746065813 container remove f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099 (image=quay.io/ceph/ceph:v20, name=infallible_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 14:50:34 compute-0 systemd[1]: libpod-conmon-f66d834ae3ace385463fcda0fcbd14ef8a78e8f975ae971f169cb4b24b228099.scope: Deactivated successfully.
Feb 01 14:50:34 compute-0 systemd[1]: Reloading.
Feb 01 14:50:34 compute-0 systemd-rc-local-generator[77939]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:34 compute-0 systemd-sysv-generator[77944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:34 compute-0 sudo[74130]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:34 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:34 compute-0 podman[77957]: 2026-02-01 14:50:34.531952536 +0000 UTC m=+0.044115729 container create a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:50:34 compute-0 systemd[1]: Started libpod-conmon-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope.
Feb 01 14:50:34 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:34 compute-0 podman[77957]: 2026-02-01 14:50:34.509040408 +0000 UTC m=+0.021203651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:34 compute-0 podman[77957]: 2026-02-01 14:50:34.62007804 +0000 UTC m=+0.132241273 container init a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 01 14:50:34 compute-0 podman[77957]: 2026-02-01 14:50:34.635052493 +0000 UTC m=+0.147215676 container start a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:50:34 compute-0 podman[77957]: 2026-02-01 14:50:34.638848721 +0000 UTC m=+0.151011964 container attach a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:34 compute-0 sudo[78001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzizvfjcvchtlejcmoxkrlajuhnwdnq ; /usr/bin/python3'
Feb 01 14:50:34 compute-0 sudo[78001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:34 compute-0 python3[78003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.014027127 +0000 UTC m=+0.071329859 container create 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:35 compute-0 systemd[1]: Started libpod-conmon-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope.
Feb 01 14:50:35 compute-0 ceph-mon[75179]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:35 compute-0 ceph-mon[75179]: Added label _admin to host compute-0
Feb 01 14:50:35 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/117713498' entity='client.admin' 
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:34.982494005 +0000 UTC m=+0.039796837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.112172894 +0000 UTC m=+0.169475656 container init 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.12083617 +0000 UTC m=+0.178138922 container start 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.124562405 +0000 UTC m=+0.181865177 container attach 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:50:35 compute-0 compassionate_curran[77973]: [
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:     {
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "available": false,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "being_replaced": false,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "ceph_device_lvm": false,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "lsm_data": {},
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "lvs": [],
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "path": "/dev/sr0",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "rejected_reasons": [
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "Insufficient space (<5GB)",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "Has a FileSystem"
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         ],
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         "sys_api": {
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "actuators": null,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "device_nodes": [
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:                 "sr0"
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             ],
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "devname": "sr0",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "human_readable_size": "482.00 KB",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "id_bus": "ata",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "model": "QEMU DVD-ROM",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "nr_requests": "2",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "parent": "/dev/sr0",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "partitions": {},
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "path": "/dev/sr0",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "removable": "1",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "rev": "2.5+",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "ro": "0",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "rotational": "1",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "sas_address": "",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "sas_device_handle": "",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "scheduler_mode": "mq-deadline",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "sectors": 0,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "sectorsize": "2048",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "size": 493568.0,
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "support_discard": "2048",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "type": "disk",
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:             "vendor": "QEMU"
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:         }
Feb 01 14:50:35 compute-0 compassionate_curran[77973]:     }
Feb 01 14:50:35 compute-0 compassionate_curran[77973]: ]
Feb 01 14:50:35 compute-0 systemd[1]: libpod-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope: Deactivated successfully.
Feb 01 14:50:35 compute-0 podman[77957]: 2026-02-01 14:50:35.197289383 +0000 UTC m=+0.709452536 container died a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ded3bc62bc9074de5cf7dd65ca2a478401a3774b6962c10a9745b82c5d22660-merged.mount: Deactivated successfully.
Feb 01 14:50:35 compute-0 podman[77957]: 2026-02-01 14:50:35.235829553 +0000 UTC m=+0.747992696 container remove a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_curran, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:35 compute-0 systemd[1]: libpod-conmon-a80920b2a9110cf68688a799bb4c9eef1db01cfc366b5e1de1ea3c00e7c8fd5d.scope: Deactivated successfully.
Feb 01 14:50:35 compute-0 sudo[77771]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:35 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 01 14:50:35 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 01 14:50:35 compute-0 sudo[78750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 01 14:50:35 compute-0 sudo[78750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78750]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:35 compute-0 sudo[78775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph
Feb 01 14:50:35 compute-0 sudo[78775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78775]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[78800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[78800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78800]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[78825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:35 compute-0 sudo[78825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78825]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[78850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[78850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78850]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb 01 14:50:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1889255596' entity='client.admin' 
Feb 01 14:50:35 compute-0 systemd[1]: libpod-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope: Deactivated successfully.
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.566723627 +0000 UTC m=+0.624026349 container died 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:35 compute-0 sudo[78898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[78898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3b56feefdd05c9c6949dcb1520c316e544b366b46804e2abf60303f6831df8-merged.mount: Deactivated successfully.
Feb 01 14:50:35 compute-0 sudo[78898]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 podman[78009]: 2026-02-01 14:50:35.59722304 +0000 UTC m=+0.654525762 container remove 56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c (image=quay.io/ceph/ceph:v20, name=awesome_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:35 compute-0 systemd[1]: libpod-conmon-56b6984ceecd68f5631b0e08406079628c3ab617f4f2ca4c0d40f210d4325d9c.scope: Deactivated successfully.
Feb 01 14:50:35 compute-0 sudo[78001]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[78936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[78936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78936]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[78961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 01 14:50:35 compute-0 sudo[78961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78961]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb 01 14:50:35 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb 01 14:50:35 compute-0 ceph-mgr[75469]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 01 14:50:35 compute-0 sudo[78986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config
Feb 01 14:50:35 compute-0 sudo[78986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[78986]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[79011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config
Feb 01 14:50:35 compute-0 sudo[79011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[79011]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[79036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[79036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[79036]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[79061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:35 compute-0 sudo[79061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[79061]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:35 compute-0 sudo[79096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf.new
Feb 01 14:50:35 compute-0 sudo[79096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:35 compute-0 sudo[79096]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf.new
Feb 01 14:50:36 compute-0 sudo[79190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79190]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf.new
Feb 01 14:50:36 compute-0 sudo[79234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79234]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf.new /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb 01 14:50:36 compute-0 sudo[79259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79259]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 01 14:50:36 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 01 14:50:36 compute-0 sudo[79284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 01 14:50:36 compute-0 sudo[79284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79284]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph
Feb 01 14:50:36 compute-0 sudo[79332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79332]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:36 compute-0 ceph-mon[75179]: Updating compute-0:/etc/ceph/ceph.conf
Feb 01 14:50:36 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1889255596' entity='client.admin' 
Feb 01 14:50:36 compute-0 ceph-mon[75179]: Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.conf
Feb 01 14:50:36 compute-0 sudo[79381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79381]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkznqcvmaovpdlgihszgzqputmjnkuqj ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957435.8967826-36400-270880312506536/async_wrapper.py j351000907281 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957435.8967826-36400-270880312506536/AnsiballZ_command.py _'
Feb 01 14:50:36 compute-0 sudo[79429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:36 compute-0 sudo[79433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:36 compute-0 sudo[79433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79433]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79459]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:36 compute-0 ansible-async_wrapper.py[79434]: Invoked with j351000907281 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957435.8967826-36400-270880312506536/AnsiballZ_command.py _
Feb 01 14:50:36 compute-0 ansible-async_wrapper.py[79509]: Starting module and watcher
Feb 01 14:50:36 compute-0 ansible-async_wrapper.py[79509]: Start watching 79510 (30)
Feb 01 14:50:36 compute-0 ansible-async_wrapper.py[79510]: Start module (79510)
Feb 01 14:50:36 compute-0 ansible-async_wrapper.py[79434]: Return async_wrapper task started.
Feb 01 14:50:36 compute-0 sudo[79429]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79511]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79537]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 01 14:50:36 compute-0 sudo[79562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79562]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb 01 14:50:36 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb 01 14:50:36 compute-0 python3[79512]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:36 compute-0 sudo[79588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config
Feb 01 14:50:36 compute-0 sudo[79588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79588]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 podman[79587]: 2026-02-01 14:50:36.673084564 +0000 UTC m=+0.042004530 container create dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 01 14:50:36 compute-0 systemd[1]: Started libpod-conmon-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope.
Feb 01 14:50:36 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:36 compute-0 sudo[79624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config
Feb 01 14:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:36 compute-0 sudo[79624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79624]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 podman[79587]: 2026-02-01 14:50:36.647995514 +0000 UTC m=+0.016915500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:36 compute-0 podman[79587]: 2026-02-01 14:50:36.749216978 +0000 UTC m=+0.118136944 container init dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:36 compute-0 podman[79587]: 2026-02-01 14:50:36.755981069 +0000 UTC m=+0.124901045 container start dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:36 compute-0 podman[79587]: 2026-02-01 14:50:36.763348218 +0000 UTC m=+0.132268174 container attach dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:36 compute-0 sudo[79657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79657]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:36 compute-0 sudo[79682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79682]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:36 compute-0 sudo[79726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring.new
Feb 01 14:50:36 compute-0 sudo[79726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:36 compute-0 sudo[79726]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 sudo[79774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring.new
Feb 01 14:50:37 compute-0 sudo[79774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:37 compute-0 sudo[79774]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:50:37 compute-0 laughing_panini[79651]: 
Feb 01 14:50:37 compute-0 laughing_panini[79651]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 01 14:50:37 compute-0 sudo[79799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring.new
Feb 01 14:50:37 compute-0 sudo[79799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:37 compute-0 sudo[79799]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 systemd[1]: libpod-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope: Deactivated successfully.
Feb 01 14:50:37 compute-0 podman[79587]: 2026-02-01 14:50:37.140878941 +0000 UTC m=+0.509798907 container died dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b73ce7a8b95741a11d17e73023e28b546866f9e4b02a1f23027b5faced8a52d4-merged.mount: Deactivated successfully.
Feb 01 14:50:37 compute-0 podman[79587]: 2026-02-01 14:50:37.178084124 +0000 UTC m=+0.547004090 container remove dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9 (image=quay.io/ceph/ceph:v20, name=laughing_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:37 compute-0 ansible-async_wrapper.py[79510]: Module complete (79510)
Feb 01 14:50:37 compute-0 systemd[1]: libpod-conmon-dab1dc82eb4afa85f40fff689bbf549ba9b5638ff798f320d9100fae08c9eab9.scope: Deactivated successfully.
Feb 01 14:50:37 compute-0 sudo[79827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring.new /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb 01 14:50:37 compute-0 sudo[79827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:37 compute-0 sudo[79827]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1))
Feb 01 14:50:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 01 14:50:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb 01 14:50:37 compute-0 ceph-mon[75179]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 01 14:50:37 compute-0 ceph-mon[75179]: Updating compute-0:/var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/config/ceph.client.admin.keyring
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 01 14:50:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:37 compute-0 sudo[79863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:37 compute-0 sudo[79863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:37 compute-0 sudo[79863]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 sudo[79888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:37 compute-0 sudo[79888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb 01 14:50:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 01 14:50:37 compute-0 sudo[79995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuzbackygqgnrkwywivhoygfbvnivwhl ; /usr/bin/python3'
Feb 01 14:50:37 compute-0 sudo[79995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.825884495 +0000 UTC m=+0.051247982 container create 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:37 compute-0 systemd[1]: Started libpod-conmon-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope.
Feb 01 14:50:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.797041978 +0000 UTC m=+0.022405495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:37 compute-0 python3[80002]: ansible-ansible.legacy.async_status Invoked with jid=j351000907281.79434 mode=status _async_dir=/root/.ansible_async
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.899607981 +0000 UTC m=+0.124971468 container init 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 14:50:37 compute-0 sudo[79995]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.912195967 +0000 UTC m=+0.137559464 container start 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:50:37 compute-0 eloquent_chebyshev[80021]: 167 167
Feb 01 14:50:37 compute-0 systemd[1]: libpod-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope: Deactivated successfully.
Feb 01 14:50:37 compute-0 conmon[80021]: conmon 40531ed0ebb16ef0833e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope/container/memory.events
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.916237571 +0000 UTC m=+0.141601078 container attach 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.916606412 +0000 UTC m=+0.141969879 container died 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6244cb21e32c9b894c2a7cd9bac800c27e9142af7b9e62c025bafae637e91699-merged.mount: Deactivated successfully.
Feb 01 14:50:37 compute-0 podman[80004]: 2026-02-01 14:50:37.963033264 +0000 UTC m=+0.188396731 container remove 40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_chebyshev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 14:50:37 compute-0 systemd[1]: libpod-conmon-40531ed0ebb16ef0833e7c9afa8f36b24e901344d69f243deb81a6574cd3fff9.scope: Deactivated successfully.
Feb 01 14:50:38 compute-0 systemd[1]: Reloading.
Feb 01 14:50:38 compute-0 systemd-rc-local-generator[80109]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:38 compute-0 systemd-sysv-generator[80112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:38 compute-0 sudo[80086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkildvjcrwpychicnzpfqdbdrwytttgi ; /usr/bin/python3'
Feb 01 14:50:38 compute-0 sudo[80086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:38 compute-0 systemd[1]: Reloading.
Feb 01 14:50:38 compute-0 ceph-mon[75179]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:50:38 compute-0 ceph-mon[75179]: Deploying daemon crash.compute-0 on compute-0
Feb 01 14:50:38 compute-0 ceph-mon[75179]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:38 compute-0 ceph-mon[75179]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 01 14:50:38 compute-0 systemd-rc-local-generator[80143]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:38 compute-0 systemd-sysv-generator[80150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:38 compute-0 python3[80123]: ansible-ansible.legacy.async_status Invoked with jid=j351000907281.79434 mode=cleanup _async_dir=/root/.ansible_async
Feb 01 14:50:38 compute-0 sudo[80086]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:38 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:38 compute-0 podman[80212]: 2026-02-01 14:50:38.667934121 +0000 UTC m=+0.052937039 container create 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa05ab3d03ad6e19ad26e698ba5de43b255f158b71fed164b628eac3d9658fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:38 compute-0 podman[80212]: 2026-02-01 14:50:38.733686352 +0000 UTC m=+0.118689320 container init 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:50:38 compute-0 sudo[80253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwayhfbpwjtyqgtvlxcpfmcvgkrdypz ; /usr/bin/python3'
Feb 01 14:50:38 compute-0 podman[80212]: 2026-02-01 14:50:38.738989172 +0000 UTC m=+0.123992090 container start 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:38 compute-0 podman[80212]: 2026-02-01 14:50:38.645114775 +0000 UTC m=+0.030117733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:38 compute-0 sudo[80253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:38 compute-0 bash[80212]: 9bd6536237272ef86723b3eaf8a56f29fc7565963ed6c6d016eb80c9c8c15825
Feb 01 14:50:38 compute-0 systemd[1]: Started Ceph crash.compute-0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: INFO:ceph-crash:pinging cluster to exercise our key
Feb 01 14:50:38 compute-0 sudo[79888]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1))
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event d78f5c1b-7fbd-477c-92ac-1b0c26828934 (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2))
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb 01 14:50:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb 01 14:50:38 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.876+0000 7fd39659a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.876+0000 7fd39659a640 -1 AuthRegistry(0x7fd390052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.878+0000 7fd39659a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.878+0000 7fd39659a640 -1 AuthRegistry(0x7fd396598fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.879+0000 7fd38ffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: 2026-02-01T14:50:38.880+0000 7fd39659a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb 01 14:50:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-crash-compute-0[80233]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb 01 14:50:38 compute-0 python3[80257]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 01 14:50:38 compute-0 sudo[80253]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:38 compute-0 sudo[80260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:38 compute-0 sudo[80260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:38 compute-0 sudo[80260]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:38 compute-0 sudo[80297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:38 compute-0 sudo[80297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:39 compute-0 sudo[80361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzbwtbfbbkjpcrcjlynyqqrdvyyjfuxw ; /usr/bin/python3'
Feb 01 14:50:39 compute-0 sudo[80361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.392391141 +0000 UTC m=+0.050838420 container create c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:39 compute-0 python3[80370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:39 compute-0 systemd[1]: Started libpod-conmon-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope.
Feb 01 14:50:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.455006273 +0000 UTC m=+0.035274379 container create bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.369143753 +0000 UTC m=+0.027591122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.468716821 +0000 UTC m=+0.127164130 container init c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.47397177 +0000 UTC m=+0.132419059 container start c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.476835021 +0000 UTC m=+0.135282310 container attach c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:50:39 compute-0 elastic_hermann[80416]: 167 167
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.477990343 +0000 UTC m=+0.136437622 container died c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:39 compute-0 systemd[1]: Started libpod-conmon-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope.
Feb 01 14:50:39 compute-0 systemd[1]: libpod-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope: Deactivated successfully.
Feb 01 14:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e985f998fbf9d88d5079f64574a61b6c0b88c047c2965606611060653933252-merged.mount: Deactivated successfully.
Feb 01 14:50:39 compute-0 podman[80388]: 2026-02-01 14:50:39.511851511 +0000 UTC m=+0.170298780 container remove c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:50:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:39 compute-0 systemd[1]: libpod-conmon-c54e3657ba32d34673e0afa3acfcc06e15c2ac9d774c87fdcc75ace3edb85e6b.scope: Deactivated successfully.
Feb 01 14:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.438446124 +0000 UTC m=+0.018714250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.539494794 +0000 UTC m=+0.119762920 container init bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.544643689 +0000 UTC m=+0.124911805 container start bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.547588803 +0000 UTC m=+0.127856919 container attach bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:50:39 compute-0 systemd[1]: Reloading.
Feb 01 14:50:39 compute-0 systemd-rc-local-generator[80468]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:39 compute-0 systemd-sysv-generator[80471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rdxlja", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb 01 14:50:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:39 compute-0 ceph-mon[75179]: Deploying daemon mgr.compute-0.rdxlja on compute-0
Feb 01 14:50:39 compute-0 systemd[1]: Reloading.
Feb 01 14:50:39 compute-0 systemd-rc-local-generator[80522]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:39 compute-0 systemd-sysv-generator[80525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:50:39 compute-0 zealous_roentgen[80432]: 
Feb 01 14:50:39 compute-0 zealous_roentgen[80432]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 01 14:50:39 compute-0 podman[80402]: 2026-02-01 14:50:39.953874719 +0000 UTC m=+0.534142835 container died bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:50:40 compute-0 systemd[1]: libpod-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope: Deactivated successfully.
Feb 01 14:50:40 compute-0 podman[80402]: 2026-02-01 14:50:40.055276949 +0000 UTC m=+0.635545055 container remove bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1 (image=quay.io/ceph/ceph:v20, name=zealous_roentgen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:40 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-80189b7518f542db910caed96867e9c25dd5cdd5e9b12fe41a073faf2c540e22-merged.mount: Deactivated successfully.
Feb 01 14:50:40 compute-0 systemd[1]: libpod-conmon-bf23f3e237282b644a9311bc222cda26c7cf567ca5d63d133f423b08714689e1.scope: Deactivated successfully.
Feb 01 14:50:40 compute-0 sudo[80361]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:40 compute-0 podman[80602]: 2026-02-01 14:50:40.321019429 +0000 UTC m=+0.052804906 container create 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374/merged/var/lib/ceph/mgr/ceph-compute-0.rdxlja supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 podman[80602]: 2026-02-01 14:50:40.295172127 +0000 UTC m=+0.026957714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:40 compute-0 podman[80602]: 2026-02-01 14:50:40.400561949 +0000 UTC m=+0.132347506 container init 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:40 compute-0 podman[80602]: 2026-02-01 14:50:40.411368115 +0000 UTC m=+0.143153622 container start 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:50:40 compute-0 bash[80602]: 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686
Feb 01 14:50:40 compute-0 systemd[1]: Started Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:40 compute-0 sudo[80644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljuuuuhbtmoyuhofeedmkpuvlwpidtx ; /usr/bin/python3'
Feb 01 14:50:40 compute-0 sudo[80644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:40 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:40 compute-0 sudo[80297]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: pidfile_write: ignore empty --pid-file
Feb 01 14:50:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2))
Feb 01 14:50:40 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event d9ea7757-a6e5-4932-8936-6b3fa39a3c39 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Feb 01 14:50:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'alerts'
Feb 01 14:50:40 compute-0 sudo[80668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:50:40 compute-0 sudo[80668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:40 compute-0 sudo[80668]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:40 compute-0 python3[80647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'balancer'
Feb 01 14:50:40 compute-0 sudo[80693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:40 compute-0 sudo[80693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:40 compute-0 sudo[80693]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:40 compute-0 podman[80716]: 2026-02-01 14:50:40.659135266 +0000 UTC m=+0.049058959 container create 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:40 compute-0 sudo[80726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:50:40 compute-0 sudo[80726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:40 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'cephadm'
Feb 01 14:50:40 compute-0 systemd[1]: Started libpod-conmon-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope.
Feb 01 14:50:40 compute-0 podman[80716]: 2026-02-01 14:50:40.634994013 +0000 UTC m=+0.024917706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:40 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:40 compute-0 podman[80716]: 2026-02-01 14:50:40.759332392 +0000 UTC m=+0.149256155 container init 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 14:50:40 compute-0 podman[80716]: 2026-02-01 14:50:40.766632438 +0000 UTC m=+0.156556091 container start 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:40 compute-0 podman[80716]: 2026-02-01 14:50:40.769749266 +0000 UTC m=+0.159672959 container attach 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:40 compute-0 ceph-mon[75179]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:40 compute-0 ceph-mon[75179]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:50:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:41 compute-0 podman[80828]: 2026-02-01 14:50:41.110103562 +0000 UTC m=+0.063413469 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286271513' entity='client.admin' 
Feb 01 14:50:41 compute-0 systemd[1]: libpod-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope: Deactivated successfully.
Feb 01 14:50:41 compute-0 podman[80716]: 2026-02-01 14:50:41.166919215 +0000 UTC m=+0.556842928 container died 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d74b70ae94a8faa0b6d96c71df9c912adb557bff9d9c3f703452bde108c49c0-merged.mount: Deactivated successfully.
Feb 01 14:50:41 compute-0 podman[80716]: 2026-02-01 14:50:41.21268511 +0000 UTC m=+0.602608803 container remove 15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf (image=quay.io/ceph/ceph:v20, name=peaceful_spence, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:41 compute-0 podman[80828]: 2026-02-01 14:50:41.21302354 +0000 UTC m=+0.166333427 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:50:41 compute-0 sudo[80644]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:41 compute-0 systemd[1]: libpod-conmon-15618cb4aed80d73a7de1eaa816ac83c5ecf93160fef71db7bc189582994fbdf.scope: Deactivated successfully.
Feb 01 14:50:41 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'crash'
Feb 01 14:50:41 compute-0 sudo[80944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywpbwabtlypccideiyocuelmyeedvlx ; /usr/bin/python3'
Feb 01 14:50:41 compute-0 sudo[80944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:41 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'dashboard'
Feb 01 14:50:41 compute-0 ansible-async_wrapper.py[79509]: Done in kid B.
Feb 01 14:50:41 compute-0 python3[80949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:41 compute-0 podman[80968]: 2026-02-01 14:50:41.571475344 +0000 UTC m=+0.040844650 container create 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 14:50:41 compute-0 systemd[1]: Started libpod-conmon-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope.
Feb 01 14:50:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:41 compute-0 sudo[80726]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:41 compute-0 podman[80968]: 2026-02-01 14:50:41.640829088 +0000 UTC m=+0.110198414 container init 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:41 compute-0 podman[80968]: 2026-02-01 14:50:41.645363203 +0000 UTC m=+0.114732519 container start 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 14:50:41 compute-0 podman[80968]: 2026-02-01 14:50:41.553981196 +0000 UTC m=+0.023350622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:41 compute-0 podman[80968]: 2026-02-01 14:50:41.650485684 +0000 UTC m=+0.119855010 container attach 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:41 compute-0 sudo[81001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:50:41 compute-0 sudo[81001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:41 compute-0 sudo[81001]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:41 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb 01 14:50:41 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 01 14:50:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:41 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 01 14:50:41 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 01 14:50:41 compute-0 sudo[81045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:41 compute-0 sudo[81045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:41 compute-0 sudo[81045]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:41 compute-0 sudo[81070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:41 compute-0 sudo[81070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4238767418' entity='client.admin' 
Feb 01 14:50:42 compute-0 systemd[1]: libpod-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 podman[80968]: 2026-02-01 14:50:42.03561016 +0000 UTC m=+0.504979486 container died 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 14:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0045346ac2410a2a60cb1671d16fca276a060046dc4455a8953306514ca4e3f0-merged.mount: Deactivated successfully.
Feb 01 14:50:42 compute-0 podman[80968]: 2026-02-01 14:50:42.068999468 +0000 UTC m=+0.538368774 container remove 6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744 (image=quay.io/ceph/ceph:v20, name=stupefied_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:42 compute-0 systemd[1]: libpod-conmon-6af9b855651fdbe2cddc29c1cf6447d33f67a4d9c09a25c362498da38037c744.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 sudo[80944]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'devicehealth'
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3286271513' entity='client.admin' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:42 compute-0 ceph-mon[75179]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 01 14:50:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4238767418' entity='client.admin' 
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'diskprediction_local'
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.173271516 +0000 UTC m=+0.046118076 container create 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:42 compute-0 systemd[1]: Started libpod-conmon-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope.
Feb 01 14:50:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.14604314 +0000 UTC m=+0.018889710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.250525624 +0000 UTC m=+0.123372244 container init 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.257163131 +0000 UTC m=+0.130009691 container start 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.260575962 +0000 UTC m=+0.133422552 container attach 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:42 compute-0 boring_elion[81143]: 167 167
Feb 01 14:50:42 compute-0 systemd[1]: libpod-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.263792777 +0000 UTC m=+0.136639377 container died 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:50:42 compute-0 sudo[81171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhmddlxbvdudywdepfheafsrishlebs ; /usr/bin/python3'
Feb 01 14:50:42 compute-0 sudo[81171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0bcc7ca7b8a9ff4ed43aa01e09310cb8f3e09c166e094d73fe5b268bf59d12e-merged.mount: Deactivated successfully.
Feb 01 14:50:42 compute-0 podman[81126]: 2026-02-01 14:50:42.308211993 +0000 UTC m=+0.181058553 container remove 5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e (image=quay.io/ceph/ceph:v20, name=boring_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:42 compute-0 systemd[1]: libpod-conmon-5fe3426b57957d2e6f6e279440cb175eab015e579606f534bf9166d56690b87e.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 01 14:50:42 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 01 14:50:42 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja[80617]:   from numpy import show_config as show_numpy_config
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'influx'
Feb 01 14:50:42 compute-0 sudo[81070]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb 01 14:50:42 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:42 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb 01 14:50:42 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'insights'
Feb 01 14:50:42 compute-0 sudo[81186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:42 compute-0 sudo[81186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:42 compute-0 sudo[81186]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:42 compute-0 python3[81180]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:42 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'iostat'
Feb 01 14:50:42 compute-0 sudo[81211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:42 compute-0 sudo[81211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:42 compute-0 podman[81228]: 2026-02-01 14:50:42.488920114 +0000 UTC m=+0.037791350 container create 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'k8sevents'
Feb 01 14:50:42 compute-0 systemd[1]: Started libpod-conmon-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope.
Feb 01 14:50:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:42 compute-0 podman[81228]: 2026-02-01 14:50:42.566141301 +0000 UTC m=+0.115012567 container init 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:42 compute-0 podman[81228]: 2026-02-01 14:50:42.473652622 +0000 UTC m=+0.022523868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:42 compute-0 podman[81228]: 2026-02-01 14:50:42.57387237 +0000 UTC m=+0.122743636 container start 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:42 compute-0 podman[81228]: 2026-02-01 14:50:42.577477707 +0000 UTC m=+0.126348973 container attach 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.774855172 +0000 UTC m=+0.051376902 container create 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:42 compute-0 systemd[1]: Started libpod-conmon-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope.
Feb 01 14:50:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.826355487 +0000 UTC m=+0.102877217 container init 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.830467739 +0000 UTC m=+0.106989489 container start 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:42 compute-0 thirsty_newton[81305]: 167 167
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.833830649 +0000 UTC m=+0.110352379 container attach 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:42 compute-0 systemd[1]: libpod-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.835352814 +0000 UTC m=+0.111874544 container died 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'localpool'
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.751878762 +0000 UTC m=+0.028400582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:42 compute-0 podman[81289]: 2026-02-01 14:50:42.874416691 +0000 UTC m=+0.150938421 container remove 46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0 (image=quay.io/ceph/ceph:v20, name=thirsty_newton, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:42 compute-0 systemd[1]: libpod-conmon-46f6d9822d7c8225815f91ca913c5306c19c5709e081424e6e20d13c629cf0c0.scope: Deactivated successfully.
Feb 01 14:50:42 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'mds_autoscaler'
Feb 01 14:50:42 compute-0 sudo[81211]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb 01 14:50:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 01 14:50:42 compute-0 sudo[81323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:42 compute-0 sudo[81323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:42 compute-0 sudo[81323]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:43 compute-0 sudo[81349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee0db40140f0fca9d088cf434a9da7450598494cd3d190285961ccd3c3bf3d0b-merged.mount: Deactivated successfully.
Feb 01 14:50:43 compute-0 sudo[81349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'mirroring'
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'nfs'
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:43 compute-0 ceph-mon[75179]: Reconfiguring mgr.compute-0.viosrg (unknown last config time)...
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.viosrg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mgr services"} : dispatch
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:43 compute-0 ceph-mon[75179]: Reconfiguring daemon mgr.compute-0.viosrg on compute-0
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:43 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 01 14:50:43 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 2 completed events
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'orchestrator'
Feb 01 14:50:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:50:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:43 compute-0 podman[81418]: 2026-02-01 14:50:43.544575117 +0000 UTC m=+0.083843614 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:43 compute-0 podman[81418]: 2026-02-01 14:50:43.618733793 +0000 UTC m=+0.158002240 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'osd_perf_query'
Feb 01 14:50:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'osd_support'
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'pg_autoscaler'
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'progress'
Feb 01 14:50:43 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'prometheus'
Feb 01 14:50:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb 01 14:50:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:50:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 01 14:50:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb 01 14:50:43 compute-0 vigorous_austin[81251]: set require_min_compat_client to mimic
Feb 01 14:50:43 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb 01 14:50:43 compute-0 systemd[1]: libpod-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope: Deactivated successfully.
Feb 01 14:50:43 compute-0 podman[81228]: 2026-02-01 14:50:43.972719907 +0000 UTC m=+1.521591213 container died 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7d4089cf7006ac7d282edbc91dd2533c10173438a6aff0fdd0220a982266cf8-merged.mount: Deactivated successfully.
Feb 01 14:50:44 compute-0 podman[81228]: 2026-02-01 14:50:44.009113324 +0000 UTC m=+1.557984580 container remove 3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae (image=quay.io/ceph/ceph:v20, name=vigorous_austin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:50:44 compute-0 systemd[1]: libpod-conmon-3b5d59a454ec483504385c899dc7be032a4a417c8f7d50c9fc34b09dc620f3ae.scope: Deactivated successfully.
Feb 01 14:50:44 compute-0 sudo[81171]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:44 compute-0 sudo[81349]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:50:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:50:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 sudo[81541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:50:44 compute-0 sudo[81541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:44 compute-0 sudo[81541]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:44 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'rbd_support'
Feb 01 14:50:44 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'rgw'
Feb 01 14:50:44 compute-0 sudo[81589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgtddbcsxeigiaeojlujtetucqsngoye ; /usr/bin/python3'
Feb 01 14:50:44 compute-0 sudo[81589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:44 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mon[75179]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3284079116' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 01 14:50:44 compute-0 ceph-mon[75179]: osdmap e3: 0 total, 0 up, 0 in
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:44 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'rook'
Feb 01 14:50:44 compute-0 python3[81591]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:44 compute-0 podman[81592]: 2026-02-01 14:50:44.660239777 +0000 UTC m=+0.056672309 container create 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:50:44 compute-0 systemd[1]: Started libpod-conmon-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope.
Feb 01 14:50:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:44 compute-0 podman[81592]: 2026-02-01 14:50:44.636212686 +0000 UTC m=+0.032645298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:44 compute-0 podman[81592]: 2026-02-01 14:50:44.753595832 +0000 UTC m=+0.150028364 container init 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 14:50:44 compute-0 podman[81592]: 2026-02-01 14:50:44.758035713 +0000 UTC m=+0.154468255 container start 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:44 compute-0 podman[81592]: 2026-02-01 14:50:44.76130373 +0000 UTC m=+0.157736312 container attach 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'selftest'
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'smb'
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:45 compute-0 sudo[81632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:45 compute-0 sudo[81632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:45 compute-0 sudo[81632]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:45 compute-0 sudo[81657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Feb 01 14:50:45 compute-0 sudo[81657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'snap_schedule'
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'stats'
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'status'
Feb 01 14:50:45 compute-0 sudo[81657]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO root] Added host compute-0
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'telegraf'
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb 01 14:50:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1))
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb 01 14:50:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:45 compute-0 gracious_stonebraker[81608]: Added host 'compute-0' with addr '192.168.122.100'
Feb 01 14:50:45 compute-0 gracious_stonebraker[81608]: Scheduled mon update...
Feb 01 14:50:45 compute-0 gracious_stonebraker[81608]: Scheduled mgr update...
Feb 01 14:50:45 compute-0 gracious_stonebraker[81608]: Scheduled osd.default_drive_group update...
Feb 01 14:50:45 compute-0 systemd[1]: libpod-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope: Deactivated successfully.
Feb 01 14:50:45 compute-0 podman[81592]: 2026-02-01 14:50:45.673010029 +0000 UTC m=+1.069442571 container died 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'telemetry'
Feb 01 14:50:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb34818f07695627e97bc0ca2001311f1586c8b57c6d8599af3159d971fec39-merged.mount: Deactivated successfully.
Feb 01 14:50:45 compute-0 sudo[81702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:45 compute-0 sudo[81702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:45 compute-0 sudo[81702]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:45 compute-0 podman[81592]: 2026-02-01 14:50:45.708619974 +0000 UTC m=+1.105052506 container remove 4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568 (image=quay.io/ceph/ceph:v20, name=gracious_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:45 compute-0 systemd[1]: libpod-conmon-4a3473ca12fb31de3e9f468b3ba22a5f1b5a89c5f60f31f2ea25354202735568.scope: Deactivated successfully.
Feb 01 14:50:45 compute-0 sudo[81589]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:45 compute-0 sudo[81740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --name mgr.compute-0.rdxlja --force --tcp-ports 8765
Feb 01 14:50:45 compute-0 sudo[81740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:45 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'test_orchestrator'
Feb 01 14:50:45 compute-0 sudo[81795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lduvohtzarzakypjgxanqglossnmgcey ; /usr/bin/python3'
Feb 01 14:50:45 compute-0 sudo[81795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:50:45 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:46 compute-0 ceph-mgr[80645]: mgr[py] Loading python module 'volumes'
Feb 01 14:50:46 compute-0 python3[81805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.13732987 +0000 UTC m=+0.047169258 container create 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:46 compute-0 podman[81833]: 2026-02-01 14:50:46.138629298 +0000 UTC m=+0.061370738 container died 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 14:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-193743347c78a8686387b1b48b1e59b45f5b5e6c93d270c616f42c5638d1a374-merged.mount: Deactivated successfully.
Feb 01 14:50:46 compute-0 podman[81833]: 2026-02-01 14:50:46.18664508 +0000 UTC m=+0.109386520 container remove 7d94e38ece1e596fcae1a22687cbf142d9da3658707ee62f5feff59dab4a0686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:50:46 compute-0 bash[81833]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-rdxlja
Feb 01 14:50:46 compute-0 systemd[1]: Started libpod-conmon-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope.
Feb 01 14:50:46 compute-0 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Main process exited, code=exited, status=143/n/a
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.115688759 +0000 UTC m=+0.025528177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:50:46 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.247847903 +0000 UTC m=+0.157687301 container init 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.25753379 +0000 UTC m=+0.167373208 container start 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.261281791 +0000 UTC m=+0.171121209 container attach 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:46 compute-0 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Failed with result 'exit-code'.
Feb 01 14:50:46 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.rdxlja for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:50:46 compute-0 systemd[1]: ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.rdxlja.service: Consumed 6.542s CPU time, 440.9M memory peak, read 0B from disk, written 165.0K to disk.
Feb 01 14:50:46 compute-0 systemd[1]: Reloading.
Feb 01 14:50:46 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:46 compute-0 systemd-rc-local-generator[81950]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:46 compute-0 systemd-sysv-generator[81953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Added host compute-0
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Saving service mon spec with placement compute-0
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Saving service mgr spec with placement compute-0
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Marking host: compute-0 for OSDSpec preview refresh.
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Saving service osd.default_drive_group spec with placement compute-0
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: Removing daemon mgr.compute-0.rdxlja from compute-0 -- ports [8765]
Feb 01 14:50:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mon[75179]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 01 14:50:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217795026' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:50:46 compute-0 practical_wright[81875]: 
Feb 01 14:50:46 compute-0 practical_wright[81875]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":46,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-01T14:49:58:117399+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-01T14:49:58.120892+0000","services":{}},"progress_events":{"7e382260-f02f-41d2-9f2f-3ca8953bdb76":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb 01 14:50:46 compute-0 sudo[81740]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:46 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.rdxlja
Feb 01 14:50:46 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.rdxlja
Feb 01 14:50:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} v 0)
Feb 01 14:50:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} : dispatch
Feb 01 14:50:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"}]': finished
Feb 01 14:50:46 compute-0 systemd[1]: libpod-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope: Deactivated successfully.
Feb 01 14:50:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.797452449 +0000 UTC m=+0.707291867 container died 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 01 14:50:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1))
Feb 01 14:50:46 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 7e382260-f02f-41d2-9f2f-3ca8953bdb76 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb 01 14:50:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 01 14:50:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-39af6efb4cbc10bd5352d0b2e720561e668a5e5caf366ed148f29396e3a8384b-merged.mount: Deactivated successfully.
Feb 01 14:50:46 compute-0 podman[81840]: 2026-02-01 14:50:46.843556205 +0000 UTC m=+0.753395603 container remove 8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b (image=quay.io/ceph/ceph:v20, name=practical_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:46 compute-0 systemd[1]: libpod-conmon-8fb8df190c9de9052e89d511ea9a8701f823a6e747db036036a0f834d2a9942b.scope: Deactivated successfully.
Feb 01 14:50:46 compute-0 sudo[81795]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:46 compute-0 sudo[81978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:50:46 compute-0 sudo[81978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:46 compute-0 sudo[81978]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:46 compute-0 sudo[82009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:46 compute-0 sudo[82009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:46 compute-0 sudo[82009]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:46 compute-0 sudo[82034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:50:46 compute-0 sudo[82034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:47 compute-0 podman[82104]: 2026-02-01 14:50:47.35969935 +0000 UTC m=+0.074710824 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:47 compute-0 podman[82104]: 2026-02-01 14:50:47.449625713 +0000 UTC m=+0.164637107 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3217795026' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: Removing key for mgr.compute-0.rdxlja
Feb 01 14:50:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rdxlja"}]': finished
Feb 01 14:50:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:47 compute-0 sudo[82034]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:50:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:47 compute-0 sudo[82198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:47 compute-0 sudo[82198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:47 compute-0 sudo[82198]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:48 compute-0 sudo[82223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:50:48 compute-0 sudo[82223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.29368855 +0000 UTC m=+0.051442694 container create def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 14:50:48 compute-0 systemd[1]: Started libpod-conmon-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope.
Feb 01 14:50:48 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.275975145 +0000 UTC m=+0.033729289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.370465294 +0000 UTC m=+0.128219398 container init def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.379389968 +0000 UTC m=+0.137144082 container start def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.38349487 +0000 UTC m=+0.141248974 container attach def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 01 14:50:48 compute-0 happy_gates[82277]: 167 167
Feb 01 14:50:48 compute-0 systemd[1]: libpod-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope: Deactivated successfully.
Feb 01 14:50:48 compute-0 conmon[82277]: conmon def19604c27e80895316 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope/container/memory.events
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.38518747 +0000 UTC m=+0.142941604 container died def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 01 14:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6705f2502e4c1c7b7d9960725e80e33c24ee1210f6b9a537ca48d9ade2c83495-merged.mount: Deactivated successfully.
Feb 01 14:50:48 compute-0 podman[82261]: 2026-02-01 14:50:48.424587507 +0000 UTC m=+0.182341621 container remove def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:48 compute-0 systemd[1]: libpod-conmon-def19604c27e80895316e0109b2cd1281633886e369c3662d1dad6a864cab2de.scope: Deactivated successfully.
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 3 completed events
Feb 01 14:50:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:50:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:50:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 podman[82300]: 2026-02-01 14:50:48.591752277 +0000 UTC m=+0.057943637 container create 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:50:48 compute-0 systemd[1]: Started libpod-conmon-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope.
Feb 01 14:50:48 compute-0 podman[82300]: 2026-02-01 14:50:48.563187161 +0000 UTC m=+0.029378601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:48 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:48 compute-0 podman[82300]: 2026-02-01 14:50:48.69049811 +0000 UTC m=+0.156689460 container init 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:48 compute-0 podman[82300]: 2026-02-01 14:50:48.697446256 +0000 UTC m=+0.163637616 container start 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:48 compute-0 podman[82300]: 2026-02-01 14:50:48.701540467 +0000 UTC m=+0.167731817 container attach 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:50:48 compute-0 ceph-mon[75179]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:50:49 compute-0 magical_stonebraker[82316]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:50:49 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:49 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:49 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e67ca44a-7e61-43f9-bf2b-cf15de50303a
Feb 01 14:50:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} v 0)
Feb 01 14:50:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} : dispatch
Feb 01 14:50:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb 01 14:50:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:50:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"}]': finished
Feb 01 14:50:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb 01 14:50:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb 01 14:50:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:50:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:49 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 01 14:50:50 compute-0 lvm[82408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:50:50 compute-0 lvm[82408]: VG ceph_vg0 finished
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb 01 14:50:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:50 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 01 14:50:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247501042' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]:  stderr: got monmap epoch 1
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: --> Creating keyring file for osd.0
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb 01 14:50:50 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e67ca44a-7e61-43f9-bf2b-cf15de50303a --setuser ceph --setgroup ceph
Feb 01 14:50:50 compute-0 ceph-mon[75179]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"} : dispatch
Feb 01 14:50:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/58118942' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a"}]': finished
Feb 01 14:50:50 compute-0 ceph-mon[75179]: osdmap e4: 1 total, 0 up, 1 in
Feb 01 14:50:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4247501042' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:50.681+0000 7f8440e778c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:50.706+0000 7f8440e778c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:51 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fd39fcf7-28de-4953-80ed-edf6e0aa6fd0
Feb 01 14:50:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 01 14:50:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 01 14:50:51 compute-0 ceph-mon[75179]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 01 14:50:51 compute-0 ceph-mon[75179]: Cluster is now healthy
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} v 0)
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} : dispatch
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"}]': finished
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:50:52 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:50:52 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:50:52 compute-0 lvm[83352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:50:52 compute-0 lvm[83352]: VG ceph_vg1 finished
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 01 14:50:52 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 01 14:50:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605107193' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]:  stderr: got monmap epoch 1
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: --> Creating keyring file for osd.1
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 01 14:50:52 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid fd39fcf7-28de-4953-80ed-edf6e0aa6fd0 --setuser ceph --setgroup ceph
Feb 01 14:50:52 compute-0 ceph-mon[75179]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"} : dispatch
Feb 01 14:50:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3376588258' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0"}]': finished
Feb 01 14:50:52 compute-0 ceph-mon[75179]: osdmap e5: 2 total, 0 up, 2 in
Feb 01 14:50:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:50:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1605107193' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:52.903+0000 7f06f561e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:52.932+0000 7f06f561e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 01 14:50:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:53 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7fabf513-99fe-4b35-b072-3f0e487337b7
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} v 0)
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"}]': finished
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:50:54 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:50:54 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:50:54 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:50:54 compute-0 lvm[84296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:50:54 compute-0 lvm[84296]: VG ceph_vg2 finished
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb 01 14:50:54 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 01 14:50:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472763208' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]:  stderr: got monmap epoch 1
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: --> Creating keyring file for osd.2
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb 01 14:50:54 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 7fabf513-99fe-4b35-b072-3f0e487337b7 --setuser ceph --setgroup ceph
Feb 01 14:50:54 compute-0 ceph-mon[75179]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2133326794' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7fabf513-99fe-4b35-b072-3f0e487337b7"}]': finished
Feb 01 14:50:54 compute-0 ceph-mon[75179]: osdmap e6: 3 total, 0 up, 3 in
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:50:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1472763208' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 01 14:50:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:50:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:54.985+0000 7fb1207cc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]:  stderr: 2026-02-01T14:50:55.005+0000 7fb1207cc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 01 14:50:55 compute-0 magical_stonebraker[82316]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb 01 14:50:56 compute-0 systemd[1]: libpod-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Deactivated successfully.
Feb 01 14:50:56 compute-0 systemd[1]: libpod-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Consumed 5.764s CPU time.
Feb 01 14:50:56 compute-0 podman[85212]: 2026-02-01 14:50:56.063953791 +0000 UTC m=+0.031461752 container died 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cebb1e1be19fa708a493b651aac111b3a261636da2173ab9e6ec4b01607569e9-merged.mount: Deactivated successfully.
Feb 01 14:50:56 compute-0 podman[85212]: 2026-02-01 14:50:56.10681247 +0000 UTC m=+0.074320431 container remove 364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:50:56 compute-0 systemd[1]: libpod-conmon-364d8d0ce6c74190f6d0bcb193b9a4f4752c23eab04f57c7a629db135d9311ba.scope: Deactivated successfully.
Feb 01 14:50:56 compute-0 sudo[82223]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:56 compute-0 sudo[85228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:56 compute-0 sudo[85228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:56 compute-0 sudo[85228]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:56 compute-0 sudo[85253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:50:56 compute-0 sudo[85253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:56 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.620134462 +0000 UTC m=+0.055981078 container create 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:50:56 compute-0 systemd[1]: Started libpod-conmon-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope.
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.592324019 +0000 UTC m=+0.028170685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.722508594 +0000 UTC m=+0.158355200 container init 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.730775519 +0000 UTC m=+0.166622095 container start 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.734443358 +0000 UTC m=+0.170289984 container attach 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:50:56 compute-0 nostalgic_jepsen[85308]: 167 167
Feb 01 14:50:56 compute-0 systemd[1]: libpod-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope: Deactivated successfully.
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.738053885 +0000 UTC m=+0.173900501 container died 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c428f6188108df1ec401a9f80a8843fe3c1b814b2e04875658ff4c10cf42b49c-merged.mount: Deactivated successfully.
Feb 01 14:50:56 compute-0 podman[85291]: 2026-02-01 14:50:56.778931075 +0000 UTC m=+0.214777691 container remove 49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jepsen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 14:50:56 compute-0 systemd[1]: libpod-conmon-49fb31cf21f1133ec79bb2333738c18163e60d162586f7bf03d6c825ce9b7fc1.scope: Deactivated successfully.
Feb 01 14:50:56 compute-0 ceph-mon[75179]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:56 compute-0 podman[85332]: 2026-02-01 14:50:56.958015669 +0000 UTC m=+0.062883874 container create 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:50:57 compute-0 systemd[1]: Started libpod-conmon-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope.
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:56.932010319 +0000 UTC m=+0.036878594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:57 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:57.052124776 +0000 UTC m=+0.156993001 container init 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:57.060804983 +0000 UTC m=+0.165673178 container start 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:57.063469502 +0000 UTC m=+0.168337707 container attach 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:50:57 compute-0 cool_merkle[85349]: {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     "0": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "devices": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "/dev/loop3"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             ],
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_name": "ceph_lv0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_size": "21470642176",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "name": "ceph_lv0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "tags": {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_name": "ceph",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.crush_device_class": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.encrypted": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.objectstore": "bluestore",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_id": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.vdo": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.with_tpm": "0"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             },
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "vg_name": "ceph_vg0"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         }
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     ],
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     "1": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "devices": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "/dev/loop4"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             ],
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_name": "ceph_lv1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_size": "21470642176",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "name": "ceph_lv1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "tags": {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_name": "ceph",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.crush_device_class": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.encrypted": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.objectstore": "bluestore",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_id": "1",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.vdo": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.with_tpm": "0"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             },
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "vg_name": "ceph_vg1"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         }
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     ],
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     "2": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "devices": [
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "/dev/loop5"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             ],
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_name": "ceph_lv2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_size": "21470642176",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "name": "ceph_lv2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "tags": {
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.cluster_name": "ceph",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.crush_device_class": "",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.encrypted": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.objectstore": "bluestore",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osd_id": "2",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.vdo": "0",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:                 "ceph.with_tpm": "0"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             },
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "type": "block",
Feb 01 14:50:57 compute-0 cool_merkle[85349]:             "vg_name": "ceph_vg2"
Feb 01 14:50:57 compute-0 cool_merkle[85349]:         }
Feb 01 14:50:57 compute-0 cool_merkle[85349]:     ]
Feb 01 14:50:57 compute-0 cool_merkle[85349]: }
Feb 01 14:50:57 compute-0 systemd[1]: libpod-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope: Deactivated successfully.
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:57.386964082 +0000 UTC m=+0.491832327 container died 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfc81ceead40937e7ac11ea73b58a177204afbe2da2f6d3e3093e84464c8ac4-merged.mount: Deactivated successfully.
Feb 01 14:50:57 compute-0 podman[85332]: 2026-02-01 14:50:57.443929479 +0000 UTC m=+0.548797654 container remove 8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:57 compute-0 systemd[1]: libpod-conmon-8e363c0feb27131dd6cdd769682d04f8322ce6937a6e784bcfe8bc2c7fd27df8.scope: Deactivated successfully.
Feb 01 14:50:57 compute-0 sudo[85253]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 01 14:50:57 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 01 14:50:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:50:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:57 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb 01 14:50:57 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb 01 14:50:57 compute-0 sudo[85371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:50:57 compute-0 sudo[85371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:57 compute-0 sudo[85371]: pam_unix(sudo:session): session closed for user root
Feb 01 14:50:57 compute-0 sudo[85396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:50:57 compute-0 sudo[85396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:50:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 01 14:50:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.001100959 +0000 UTC m=+0.063783890 container create 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 14:50:58 compute-0 systemd[1]: Started libpod-conmon-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope.
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:57.972931675 +0000 UTC m=+0.035614636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.081979695 +0000 UTC m=+0.144662596 container init 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.091128106 +0000 UTC m=+0.153811017 container start 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 14:50:58 compute-0 clever_wozniak[85477]: 167 167
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.09465269 +0000 UTC m=+0.157335611 container attach 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:50:58 compute-0 systemd[1]: libpod-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope: Deactivated successfully.
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.095573227 +0000 UTC m=+0.158256128 container died 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c4a5bb6a9b4d0dc3cf9a29a524c86c7c8db6cffa6ce0c6909942d79ce76c7f-merged.mount: Deactivated successfully.
Feb 01 14:50:58 compute-0 podman[85461]: 2026-02-01 14:50:58.13111742 +0000 UTC m=+0.193800321 container remove 2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:50:58 compute-0 systemd[1]: libpod-conmon-2eda653c9e720a629504e5478da3712119c071358346b83d108cf58b17e7b3a5.scope: Deactivated successfully.
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.332878735 +0000 UTC m=+0.045371205 container create 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:50:58 compute-0 systemd[1]: Started libpod-conmon-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope.
Feb 01 14:50:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.308092251 +0000 UTC m=+0.020584791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.411750041 +0000 UTC m=+0.124242511 container init 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.423320233 +0000 UTC m=+0.135812663 container start 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.427004073 +0000 UTC m=+0.139496573 container attach 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:58 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:50:58 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 01 14:50:58 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]:                             [--no-systemd] [--no-tmpfs]
Feb 01 14:50:58 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test[85522]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 01 14:50:58 compute-0 systemd[1]: libpod-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope: Deactivated successfully.
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.650738428 +0000 UTC m=+0.363230868 container died 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-86253a548ac47ba5e192c29c568fb2d7710b5ba1a6649ac45ba952dcbaf741f0-merged.mount: Deactivated successfully.
Feb 01 14:50:58 compute-0 podman[85506]: 2026-02-01 14:50:58.693322199 +0000 UTC m=+0.405814639 container remove 91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 01 14:50:58 compute-0 systemd[1]: libpod-conmon-91c952fcbe797605ad012b9b7fa4f855a74aea6971613b4e839176a906e3b48e.scope: Deactivated successfully.
Feb 01 14:50:58 compute-0 systemd[1]: Reloading.
Feb 01 14:50:58 compute-0 ceph-mon[75179]: Deploying daemon osd.0 on compute-0
Feb 01 14:50:58 compute-0 ceph-mon[75179]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:58 compute-0 systemd-sysv-generator[85587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:58 compute-0 systemd-rc-local-generator[85583]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:59 compute-0 systemd[1]: Reloading.
Feb 01 14:50:59 compute-0 systemd-sysv-generator[85626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:50:59 compute-0 systemd-rc-local-generator[85623]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:50:59 compute-0 systemd[1]: Starting Ceph osd.0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:50:59 compute-0 podman[85679]: 2026-02-01 14:50:59.512091246 +0000 UTC m=+0.033467382 container create 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:50:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:50:59 compute-0 podman[85679]: 2026-02-01 14:50:59.580934785 +0000 UTC m=+0.102310961 container init 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb 01 14:50:59 compute-0 podman[85679]: 2026-02-01 14:50:59.589019834 +0000 UTC m=+0.110395980 container start 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:50:59 compute-0 podman[85679]: 2026-02-01 14:50:59.49633379 +0000 UTC m=+0.017709956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:50:59 compute-0 podman[85679]: 2026-02-01 14:50:59.592683173 +0000 UTC m=+0.114059359 container attach 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:50:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:50:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:59 compute-0 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:50:59 compute-0 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:00 compute-0 lvm[85775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:00 compute-0 lvm[85775]: VG ceph_vg0 finished
Feb 01 14:51:00 compute-0 lvm[85778]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:00 compute-0 lvm[85778]: VG ceph_vg1 finished
Feb 01 14:51:00 compute-0 lvm[85780]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:00 compute-0 lvm[85780]: VG ceph_vg2 finished
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:00 compute-0 bash[85679]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:00 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:51:00 compute-0 bash[85679]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 01 14:51:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate[85692]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 01 14:51:00 compute-0 bash[85679]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 01 14:51:00 compute-0 systemd[1]: libpod-8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b.scope: Deactivated successfully.
Feb 01 14:51:00 compute-0 podman[85679]: 2026-02-01 14:51:00.642626877 +0000 UTC m=+1.164003013 container died 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 01 14:51:00 compute-0 systemd[1]: libpod-8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b.scope: Consumed 1.241s CPU time.
Feb 01 14:51:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0bad0bd724e1c9ba89042f9a0b57510546bdca0779e6d1bec40040a428fd9c2-merged.mount: Deactivated successfully.
Feb 01 14:51:00 compute-0 podman[85679]: 2026-02-01 14:51:00.67986354 +0000 UTC m=+1.201239706 container remove 8a5b4dc7b3a0fca17e74c9a3341a38d689cdaa52941eb529d598bb3e6e43e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 14:51:00 compute-0 podman[85950]: 2026-02-01 14:51:00.872025961 +0000 UTC m=+0.039597334 container create 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f569bf096b637537c69eedc48395d4e4145d0dacd8e7a7253f5cf63b9f527f3d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:00 compute-0 podman[85950]: 2026-02-01 14:51:00.91420524 +0000 UTC m=+0.081776593 container init 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:51:00 compute-0 podman[85950]: 2026-02-01 14:51:00.919884408 +0000 UTC m=+0.087455751 container start 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:00 compute-0 bash[85950]: 88ca06885fff5877af0746f2bfb486b74d1bbfecc90ba45ce08268f6f0c5c87a
Feb 01 14:51:00 compute-0 podman[85950]: 2026-02-01 14:51:00.855064628 +0000 UTC m=+0.022635991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:00 compute-0 systemd[1]: Started Ceph osd.0 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:51:00 compute-0 ceph-mon[75179]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:00 compute-0 ceph-osd[85969]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:00 compute-0 ceph-osd[85969]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 01 14:51:00 compute-0 ceph-osd[85969]: pidfile_write: ignore empty --pid-file
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:00 compute-0 sudo[85396]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:00 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 01 14:51:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 01 14:51:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:00 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb 01 14:51:00 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 sudo[85983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:01 compute-0 sudo[85983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:01 compute-0 sudo[85983]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 sudo[86012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:51:01 compute-0 sudo[86012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e400 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138e000 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb 01 14:51:01 compute-0 ceph-osd[85969]: load: jerasure load: lrc 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b6138fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount shared_bdev_used = 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Git sha 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DB SUMMARY
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DB Session ID:  WQ9Z5ULV32HB55I5VYO8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                     Options.env: 0x563b6121fea0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                Options.info_log: 0x563b622708a0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.write_buffer_manager: 0x563b61284b40
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Compression algorithms supported:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62270c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ec2841c-47a7-4a7e-b481-3d3f5da60a1c
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461313862, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461315145, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: freelist init
Feb 01 14:51:01 compute-0 ceph-osd[85969]: freelist _read_cfg
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs umount
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) close
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bdev(0x563b62025800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluefs mount shared_bdev_used = 27262976
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Git sha 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DB SUMMARY
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DB Session ID:  WQ9Z5ULV32HB55I5VYO9
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                     Options.env: 0x563b6121fce0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                Options.info_log: 0x563b62270960
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.write_buffer_manager: 0x563b61285900
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Compression algorithms supported:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b612238d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b62271d80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563b61223a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ec2841c-47a7-4a7e-b481-3d3f5da60a1c
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461347692, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461351939, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461354875, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461357312, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957461, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2841c-47a7-4a7e-b481-3d3f5da60a1c", "db_session_id": "WQ9Z5ULV32HB55I5VYO9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957461358612, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563b6248a000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: DB pointer 0x563b6242a000
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb 01 14:51:01 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 14:51:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 14:51:01 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 01 14:51:01 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 01 14:51:01 compute-0 ceph-osd[85969]: _get_class not permitted to load lua
Feb 01 14:51:01 compute-0 ceph-osd[85969]: _get_class not permitted to load sdk
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 load_pgs
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 load_pgs opened 0 pgs
Feb 01 14:51:01 compute-0 ceph-osd[85969]: osd.0 0 log_to_monitors true
Feb 01 14:51:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:01.383+0000 7f96e46208c0 -1 osd.0 0 log_to_monitors true
Feb 01 14:51:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb 01 14:51:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 01 14:51:01 compute-0 podman[86507]: 2026-02-01 14:51:01.451726159 +0000 UTC m=+0.033831293 container create 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 01 14:51:01 compute-0 systemd[1]: Started libpod-conmon-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope.
Feb 01 14:51:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:01 compute-0 podman[86507]: 2026-02-01 14:51:01.507241163 +0000 UTC m=+0.089346297 container init 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:51:01 compute-0 podman[86507]: 2026-02-01 14:51:01.511767487 +0000 UTC m=+0.093872621 container start 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:51:01 compute-0 podman[86507]: 2026-02-01 14:51:01.515029214 +0000 UTC m=+0.097134378 container attach 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:01 compute-0 gifted_elion[86524]: 167 167
Feb 01 14:51:01 compute-0 systemd[1]: libpod-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope: Deactivated successfully.
Feb 01 14:51:01 compute-0 conmon[86524]: conmon 9cff356744490323da45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope/container/memory.events
Feb 01 14:51:01 compute-0 podman[86507]: 2026-02-01 14:51:01.435106117 +0000 UTC m=+0.017211271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:01 compute-0 podman[86529]: 2026-02-01 14:51:01.546060693 +0000 UTC m=+0.019864659 container died 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ea1f2011328170ce06779371618b61b185203fecc1f11724b26a1c3f510067-merged.mount: Deactivated successfully.
Feb 01 14:51:01 compute-0 podman[86529]: 2026-02-01 14:51:01.573515836 +0000 UTC m=+0.047319802 container remove 9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_elion, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 14:51:01 compute-0 systemd[1]: libpod-conmon-9cff356744490323da4505c651fb10c65db677c8e44ba13c168dc703a20a451a.scope: Deactivated successfully.
Feb 01 14:51:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.719934622 +0000 UTC m=+0.028880816 container create 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:01 compute-0 systemd[1]: Started libpod-conmon-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope.
Feb 01 14:51:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.773086266 +0000 UTC m=+0.082032470 container init 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.781835636 +0000 UTC m=+0.090781850 container start 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.785303098 +0000 UTC m=+0.094249292 container attach 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.707732851 +0000 UTC m=+0.016679065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 01 14:51:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]:                             [--no-systemd] [--no-tmpfs]
Feb 01 14:51:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test[86570]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 01 14:51:01 compute-0 systemd[1]: libpod-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope: Deactivated successfully.
Feb 01 14:51:01 compute-0 podman[86554]: 2026-02-01 14:51:01.934592799 +0000 UTC m=+0.243539033 container died 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 01 14:51:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb 01 14:51:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:51:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 01 14:51:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:01 compute-0 ceph-mon[75179]: Deploying daemon osd.1 on compute-0
Feb 01 14:51:01 compute-0 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 01 14:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-224751a40f08f1c7e30fcbac1ff41b2bc60a77f89cb694ccc4650444d7f62a23-merged.mount: Deactivated successfully.
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:02 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:51:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:02 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:02 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:02 compute-0 podman[86554]: 2026-02-01 14:51:02.017455753 +0000 UTC m=+0.326401957 container remove 973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:02 compute-0 systemd[1]: libpod-conmon-973e1d8b1399f57a1029869f61c7dda34848bc2a722007644a84ec9592708b08.scope: Deactivated successfully.
Feb 01 14:51:02 compute-0 systemd[1]: Reloading.
Feb 01 14:51:02 compute-0 systemd-rc-local-generator[86633]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:02 compute-0 systemd-sysv-generator[86638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 01 14:51:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 01 14:51:02 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:51:02 compute-0 systemd[1]: Reloading.
Feb 01 14:51:02 compute-0 systemd-rc-local-generator[86674]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:02 compute-0 systemd-sysv-generator[86678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:02 compute-0 systemd[1]: Starting Ceph osd.1 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:51:02 compute-0 podman[86733]: 2026-02-01 14:51:02.961543422 +0000 UTC m=+0.047922160 container create c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 done with init, starting boot process
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 start_boot
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 01 14:51:03 compute-0 ceph-osd[85969]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:03 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:03 compute-0 ceph-mon[75179]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:03 compute-0 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 01 14:51:03 compute-0 ceph-mon[75179]: osdmap e7: 3 total, 0 up, 3 in
Feb 01 14:51:03 compute-0 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:03 compute-0 podman[86733]: 2026-02-01 14:51:02.941794307 +0000 UTC m=+0.028173035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:03 compute-0 podman[86733]: 2026-02-01 14:51:03.041427367 +0000 UTC m=+0.127806095 container init c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:03 compute-0 podman[86733]: 2026-02-01 14:51:03.053824914 +0000 UTC m=+0.140203612 container start c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:03 compute-0 podman[86733]: 2026-02-01 14:51:03.070872839 +0000 UTC m=+0.157251567 container attach c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 lvm[86831]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:03 compute-0 lvm[86831]: VG ceph_vg0 finished
Feb 01 14:51:03 compute-0 lvm[86834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:03 compute-0 lvm[86834]: VG ceph_vg1 finished
Feb 01 14:51:03 compute-0 lvm[86836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:03 compute-0 lvm[86836]: VG ceph_vg2 finished
Feb 01 14:51:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 bash[86733]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 01 14:51:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:51:03 compute-0 bash[86733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 01 14:51:04 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate[86748]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 01 14:51:04 compute-0 bash[86733]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 01 14:51:04 compute-0 systemd[1]: libpod-c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1.scope: Deactivated successfully.
Feb 01 14:51:04 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb 01 14:51:04 compute-0 systemd[1]: libpod-c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1.scope: Consumed 1.174s CPU time.
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:04 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:04 compute-0 ceph-mon[75179]: osdmap e8: 3 total, 0 up, 3 in
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:04 compute-0 podman[86932]: 2026-02-01 14:51:04.055044894 +0000 UTC m=+0.020457736 container died c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-565f38ba7684cfc16bc6c5da46d6e9fab53f8bcd56a473cc2a7a4c2fcc3490d4-merged.mount: Deactivated successfully.
Feb 01 14:51:04 compute-0 podman[86932]: 2026-02-01 14:51:04.149475901 +0000 UTC m=+0.114888723 container remove c488d3f590b50037a3b089bb0106796ae2a2c9635b5411e32a5db47ce25bc7e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:04 compute-0 podman[86991]: 2026-02-01 14:51:04.303254455 +0000 UTC m=+0.040849141 container create 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e46a8b1caa92540a80a4350e46370f995da455202f511a4fc6ddb23d8e4107/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:04 compute-0 podman[86991]: 2026-02-01 14:51:04.373909458 +0000 UTC m=+0.111504164 container init 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:51:04 compute-0 podman[86991]: 2026-02-01 14:51:04.37906489 +0000 UTC m=+0.116659596 container start 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:51:04 compute-0 podman[86991]: 2026-02-01 14:51:04.283923263 +0000 UTC m=+0.021517959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:04 compute-0 bash[86991]: 751c852b5ece59ea81e1fdc2a19e739eff9c738c284ce0a5ed502314ad0a4720
Feb 01 14:51:04 compute-0 systemd[1]: Started Ceph osd.1 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:51:04 compute-0 ceph-osd[87011]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: pidfile_write: ignore empty --pid-file
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 sudo[86012]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-mgr[75469]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:04 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb 01 14:51:04 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6400 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 sudo[87029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:04 compute-0 sudo[87029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:04 compute-0 sudo[87029]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab6000 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 01 14:51:04 compute-0 ceph-osd[87011]: load: jerasure load: lrc 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 sudo[87060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 sudo[87060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03aab7c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount shared_bdev_used = 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Git sha 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DB SUMMARY
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DB Session ID:  WI5QOFCFHXU9QXVNGRAO
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                     Options.env: 0x55a03a947ea0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                Options.info_log: 0x55a03b99a8a0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.write_buffer_manager: 0x55a03a9acb40
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Compression algorithms supported:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b99ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94ba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464695574, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464696768, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: freelist init
Feb 01 14:51:04 compute-0 ceph-osd[87011]: freelist _read_cfg
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs umount
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) close
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bdev(0x55a03b74d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluefs mount shared_bdev_used = 27262976
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Git sha 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DB SUMMARY
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DB Session ID:  WI5QOFCFHXU9QXVNGRAP
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                     Options.env: 0x55a03b793dc0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                Options.info_log: 0x55a03b99b340
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.write_buffer_manager: 0x55a03a9ad900
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Compression algorithms supported:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a03b9e7800)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a03a94b4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464747951, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464765329, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464785532, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464788132, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957464, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5318f7f2-9ea0-4f24-ab8e-6aafc2a90c2d", "db_session_id": "WI5QOFCFHXU9QXVNGRAP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957464789445, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a03bbb3c00
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: DB pointer 0x55a03bb54000
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 01 14:51:04 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 14:51:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 14:51:04 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 01 14:51:04 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 01 14:51:04 compute-0 ceph-osd[87011]: _get_class not permitted to load lua
Feb 01 14:51:04 compute-0 ceph-osd[87011]: _get_class not permitted to load sdk
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 load_pgs
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 load_pgs opened 0 pgs
Feb 01 14:51:04 compute-0 ceph-osd[87011]: osd.1 0 log_to_monitors true
Feb 01 14:51:04 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:04.887+0000 7fe8ab9508c0 -1 osd.1 0 log_to_monitors true
Feb 01 14:51:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb 01 14:51:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 01 14:51:05 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1631172060; not ready for session (expect reconnect)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:05 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.046112405 +0000 UTC m=+0.047228320 container create 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: purged_snaps scrub starts
Feb 01 14:51:05 compute-0 ceph-mon[75179]: purged_snaps scrub ok
Feb 01 14:51:05 compute-0 ceph-mon[75179]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:05 compute-0 systemd[1]: Started libpod-conmon-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope.
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.017108476 +0000 UTC m=+0.018224411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.136265685 +0000 UTC m=+0.137381630 container init 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.145053135 +0000 UTC m=+0.146169050 container start 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:05 compute-0 strange_darwin[87566]: 167 167
Feb 01 14:51:05 compute-0 systemd[1]: libpod-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope: Deactivated successfully.
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.152077693 +0000 UTC m=+0.153193608 container attach 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.152792184 +0000 UTC m=+0.153908099 container died 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b005c584dc881fb6c10e5cc491b24acdc121745cfea47196a56a07c21d9d1ed-merged.mount: Deactivated successfully.
Feb 01 14:51:05 compute-0 podman[87550]: 2026-02-01 14:51:05.204196846 +0000 UTC m=+0.205312761 container remove 829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:05 compute-0 systemd[1]: libpod-conmon-829268a94c33dac48a34e40e8f15402261f5b78235084b1ab3ceb3c0f4fc7dae.scope: Deactivated successfully.
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 48.513 iops: 12419.418 elapsed_sec: 0.242
Feb 01 14:51:05 compute-0 ceph-osd[85969]: log_channel(cluster) log [WRN] : OSD bench result of 12419.417952 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 0 waiting for initial osdmap
Feb 01 14:51:05 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:05.388+0000 7f96e0db4640 -1 osd.0 0 waiting for initial osdmap
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:05 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-0[85965]: 2026-02-01T14:51:05.418+0000 7f96db3a7640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 set_numa_affinity not setting numa affinity
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.442538955 +0000 UTC m=+0.032929126 container create 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Feb 01 14:51:05 compute-0 systemd[1]: Started libpod-conmon-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope.
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060] boot
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Feb 01 14:51:05 compute-0 ceph-osd[85969]: osd.0 9 state: booting -> active
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:05 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:05 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.428092367 +0000 UTC m=+0.018482558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.53522153 +0000 UTC m=+0.125611771 container init 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.542919958 +0000 UTC m=+0.133310159 container start 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.54839093 +0000 UTC m=+0.138781271 container attach 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:51:05 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 01 14:51:05 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]:                             [--no-systemd] [--no-tmpfs]
Feb 01 14:51:05 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test[87612]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 01 14:51:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:05 compute-0 systemd[1]: libpod-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope: Deactivated successfully.
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.698737402 +0000 UTC m=+0.289127613 container died 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-88db3ac0bb7f95248f5e3dbc4c69f22195e2b620d7266112cfa7c0a2572160db-merged.mount: Deactivated successfully.
Feb 01 14:51:05 compute-0 podman[87595]: 2026-02-01 14:51:05.745827927 +0000 UTC m=+0.336218108 container remove 9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 14:51:05 compute-0 systemd[1]: libpod-conmon-9e9f86d3af26098e1c01a30b65696930bd6d4a372c8d4f30ecf2e84ea4bc6a7c.scope: Deactivated successfully.
Feb 01 14:51:05 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 01 14:51:05 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 01 14:51:05 compute-0 systemd[1]: Reloading.
Feb 01 14:51:06 compute-0 ceph-mon[75179]: Deploying daemon osd.2 on compute-0
Feb 01 14:51:06 compute-0 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 01 14:51:06 compute-0 ceph-mon[75179]: osd.0 [v2:192.168.122.100:6802/1631172060,v1:192.168.122.100:6803/1631172060] boot
Feb 01 14:51:06 compute-0 ceph-mon[75179]: osdmap e9: 3 total, 1 up, 3 in
Feb 01 14:51:06 compute-0 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 01 14:51:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:06 compute-0 systemd-sysv-generator[87680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:06 compute-0 systemd-rc-local-generator[87677]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:06 compute-0 systemd[1]: Reloading.
Feb 01 14:51:06 compute-0 systemd-sysv-generator[87717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:06 compute-0 systemd-rc-local-generator[87714]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:06 compute-0 ceph-mgr[75469]: [devicehealth INFO root] creating mgr pool
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 done with init, starting boot process
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 start_boot
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 01 14:51:06 compute-0 systemd[1]: Starting Ceph osd.2 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 01 14:51:06 compute-0 ceph-osd[87011]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb 01 14:51:06 compute-0 ceph-osd[85969]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 01 14:51:06 compute-0 ceph-osd[85969]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 01 14:51:06 compute-0 ceph-osd[85969]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:06 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:06 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 01 14:51:06 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:06 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:06 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:06 compute-0 podman[87779]: 2026-02-01 14:51:06.711833484 +0000 UTC m=+0.048943491 container create 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:06 compute-0 podman[87779]: 2026-02-01 14:51:06.69448726 +0000 UTC m=+0.031597287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:06 compute-0 podman[87779]: 2026-02-01 14:51:06.80591892 +0000 UTC m=+0.143028997 container init 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 14:51:06 compute-0 podman[87779]: 2026-02-01 14:51:06.817803322 +0000 UTC m=+0.154913359 container start 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:06 compute-0 podman[87779]: 2026-02-01 14:51:06.824789709 +0000 UTC m=+0.161899756 container attach 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:06 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:06 compute-0 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:06 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:06 compute-0 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:07 compute-0 ceph-mon[75179]: OSD bench result of 12419.417952 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:07 compute-0 ceph-mon[75179]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 01 14:51:07 compute-0 ceph-mon[75179]: osdmap e10: 3 total, 1 up, 3 in
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 01 14:51:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:07 compute-0 lvm[87878]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:07 compute-0 lvm[87878]: VG ceph_vg0 finished
Feb 01 14:51:07 compute-0 lvm[87881]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:07 compute-0 lvm[87881]: VG ceph_vg1 finished
Feb 01 14:51:07 compute-0 lvm[87883]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:07 compute-0 lvm[87883]: VG ceph_vg2 finished
Feb 01 14:51:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb 01 14:51:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 01 14:51:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb 01 14:51:07 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb 01 14:51:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb 01 14:51:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:07 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:07 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:07 compute-0 bash[87779]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:51:07 compute-0 bash[87779]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 01 14:51:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate[87795]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 01 14:51:07 compute-0 bash[87779]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 01 14:51:07 compute-0 systemd[1]: libpod-4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3.scope: Deactivated successfully.
Feb 01 14:51:07 compute-0 systemd[1]: libpod-4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3.scope: Consumed 1.116s CPU time.
Feb 01 14:51:07 compute-0 podman[87779]: 2026-02-01 14:51:07.726638537 +0000 UTC m=+1.063748534 container died 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9df92f6f6487e36703c2ce89c02594e667ddf90efbb64189158c5289f08fbe5-merged.mount: Deactivated successfully.
Feb 01 14:51:07 compute-0 podman[87779]: 2026-02-01 14:51:07.814522629 +0000 UTC m=+1.151632656 container remove 4f212d3c9741444a0f7090a6eba29e36c5c5b74b806de4ee963c69cf9a5593e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2-activate, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:51:07 compute-0 podman[88047]: 2026-02-01 14:51:07.989391088 +0000 UTC m=+0.045272762 container create e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8a877f0c77494b5f89776eca99807ce841f3392a79c0f16957d756f8571a0d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 podman[88047]: 2026-02-01 14:51:08.051478017 +0000 UTC m=+0.107359661 container init e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:08 compute-0 podman[88047]: 2026-02-01 14:51:08.055091414 +0000 UTC m=+0.110973058 container start e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:08 compute-0 podman[88047]: 2026-02-01 14:51:07.963046198 +0000 UTC m=+0.018927862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:08 compute-0 bash[88047]: e57f55d1e39c1800879b1c703e9a2465a3d5f5b53936bd9f1a62980cd9b1c29d
Feb 01 14:51:08 compute-0 systemd[1]: Started Ceph osd.2 for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:51:08 compute-0 ceph-osd[88066]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: pidfile_write: ignore empty --pid-file
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 sudo[87060]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 sudo[88082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:08 compute-0 sudo[88082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:08 compute-0 sudo[88082]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a400 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87a000 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb 01 14:51:08 compute-0 sudo[88111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:08 compute-0 ceph-osd[88066]: load: jerasure load: lrc 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 sudo[88111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7e87bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount shared_bdev_used = 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Git sha 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DB SUMMARY
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DB Session ID:  FEEPM6SA8484YKKK65Q9
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                     Options.env: 0x560d7e70bf80
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                Options.info_log: 0x560d7f7668a0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.write_buffer_manager: 0x560d7f60ab40
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Compression algorithms supported:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 741a03b1-6978-4571-936f-6d904f940f62
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468418481, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468419670, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: freelist init
Feb 01 14:51:08 compute-0 ceph-osd[88066]: freelist _read_cfg
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs umount
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) close
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bdev(0x560d7f51b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluefs mount shared_bdev_used = 27262976
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: RocksDB version: 7.9.2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Git sha 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DB SUMMARY
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DB Session ID:  FEEPM6SA8484YKKK65Q8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: CURRENT file:  CURRENT
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: IDENTITY file:  IDENTITY
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.error_if_exists: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.create_if_missing: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.paranoid_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                     Options.env: 0x560d7f936a80
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                Options.info_log: 0x560d7f766a20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_file_opening_threads: 16
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.statistics: (nil)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.use_fsync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.max_log_file_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.allow_fallocate: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.use_direct_reads: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.create_missing_column_families: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.db_log_dir: 
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                                 Options.wal_dir: db.wal
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.advise_random_on_open: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.write_buffer_manager: 0x560d7f60b900
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                            Options.rate_limiter: (nil)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.unordered_write: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.row_cache: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                              Options.wal_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.allow_ingest_behind: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.two_write_queues: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.manual_wal_flush: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.wal_compression: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.atomic_flush: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.log_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.allow_data_in_errors: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.db_host_id: __hostname__
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_background_jobs: 4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_background_compactions: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_subcompactions: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.max_open_files: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.max_background_flushes: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Compression algorithms supported:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZSTD supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kXpressCompression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kBZip2Compression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kLZ4Compression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kZlibCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kLZ4HCCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         kSnappyCompression supported: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f766bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70f8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:           Options.merge_operator: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.compaction_filter_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d7f7670c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560d7e70fa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.write_buffer_size: 16777216
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.max_write_buffer_number: 64
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.compression: LZ4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.num_levels: 7
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.inplace_update_support: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.bloom_locality: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                    Options.max_successive_merges: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.force_consistency_checks: 1
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                               Options.ttl: 2592000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                       Options.enable_blob_files: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                           Options.min_blob_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 741a03b1-6978-4571-936f-6d904f940f62
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468459551, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468466748, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468481257, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468484524, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957468, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "741a03b1-6978-4571-936f-6d904f940f62", "db_session_id": "FEEPM6SA8484YKKK65Q8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957468498387, "job": 1, "event": "recovery_finished"}
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 01 14:51:08 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb 01 14:51:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:08 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.525420972 +0000 UTC m=+0.036816281 container create 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:08 compute-0 ceph-mon[75179]: purged_snaps scrub starts
Feb 01 14:51:08 compute-0 ceph-mon[75179]: purged_snaps scrub ok
Feb 01 14:51:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 01 14:51:08 compute-0 ceph-mon[75179]: osdmap e11: 3 total, 1 up, 3 in
Feb 01 14:51:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:08 compute-0 ceph-mon[75179]: pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 01 14:51:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:08 compute-0 systemd[1]: Started libpod-conmon-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope.
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560d7f94a000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: DB pointer 0x560d7f920000
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb 01 14:51:08 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 14:51:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 14:51:08 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 01 14:51:08 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 01 14:51:08 compute-0 ceph-osd[88066]: _get_class not permitted to load lua
Feb 01 14:51:08 compute-0 ceph-osd[88066]: _get_class not permitted to load sdk
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 load_pgs
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 load_pgs opened 0 pgs
Feb 01 14:51:08 compute-0 ceph-osd[88066]: osd.2 0 log_to_monitors true
Feb 01 14:51:08 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:08.581+0000 7fd7be31a8c0 -1 osd.2 0 log_to_monitors true
Feb 01 14:51:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb 01 14:51:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 01 14:51:08 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.505978187 +0000 UTC m=+0.017373526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.611661966 +0000 UTC m=+0.123057285 container init 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.616410787 +0000 UTC m=+0.127806106 container start 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.619459337 +0000 UTC m=+0.130854656 container attach 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:08 compute-0 zen_lamarr[88562]: 167 167
Feb 01 14:51:08 compute-0 systemd[1]: libpod-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope: Deactivated successfully.
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.620949401 +0000 UTC m=+0.132344720 container died 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c6ea1f772a83027904f5048c4561792fc5b1bf0663ea00d49c7d9b918dba3f-merged.mount: Deactivated successfully.
Feb 01 14:51:08 compute-0 podman[88543]: 2026-02-01 14:51:08.677378853 +0000 UTC m=+0.188774172 container remove 0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:08 compute-0 systemd[1]: libpod-conmon-0d1d0a5e686a67d8b6203785ab19883e31d063b70055978fc8134a42a9b9261a.scope: Deactivated successfully.
Feb 01 14:51:08 compute-0 podman[88618]: 2026-02-01 14:51:08.774645873 +0000 UTC m=+0.034938995 container create 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:51:08 compute-0 systemd[1]: Started libpod-conmon-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope.
Feb 01 14:51:08 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:08 compute-0 podman[88618]: 2026-02-01 14:51:08.851216631 +0000 UTC m=+0.111509743 container init 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 47.525 iops: 12166.306 elapsed_sec: 0.247
Feb 01 14:51:08 compute-0 ceph-osd[87011]: log_channel(cluster) log [WRN] : OSD bench result of 12166.306450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 0 waiting for initial osdmap
Feb 01 14:51:08 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:08.851+0000 7fe8a80e4640 -1 osd.1 0 waiting for initial osdmap
Feb 01 14:51:08 compute-0 podman[88618]: 2026-02-01 14:51:08.758556157 +0000 UTC m=+0.018849299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:08 compute-0 podman[88618]: 2026-02-01 14:51:08.855948411 +0000 UTC m=+0.116241523 container start 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Feb 01 14:51:08 compute-0 podman[88618]: 2026-02-01 14:51:08.860516086 +0000 UTC m=+0.120809198 container attach 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:08 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-1[87007]: 2026-02-01T14:51:08.874+0000 7fe8a26d7640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 set_numa_affinity not setting numa affinity
Feb 01 14:51:08 compute-0 ceph-osd[87011]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb 01 14:51:09 compute-0 lvm[88709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:09 compute-0 lvm[88709]: VG ceph_vg0 finished
Feb 01 14:51:09 compute-0 lvm[88711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:09 compute-0 lvm[88711]: VG ceph_vg1 finished
Feb 01 14:51:09 compute-0 lvm[88712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:09 compute-0 lvm[88712]: VG ceph_vg2 finished
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:09 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/133289609; not ready for session (expect reconnect)
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:09 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 01 14:51:09 compute-0 frosty_meninsky[88634]: {}
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609] boot
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Feb 01 14:51:09 compute-0 ceph-osd[87011]: osd.1 12 state: booting -> active
Feb 01 14:51:09 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:09 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:09 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:09 compute-0 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 01 14:51:09 compute-0 ceph-mon[75179]: OSD bench result of 12166.306450 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:09 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 01 14:51:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 01 14:51:09 compute-0 systemd[1]: libpod-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope: Deactivated successfully.
Feb 01 14:51:09 compute-0 podman[88618]: 2026-02-01 14:51:09.587927538 +0000 UTC m=+0.848220660 container died 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9502b20db9edab9e810f609d44412e9fa90dfba97f2997b2376c2899320d24f-merged.mount: Deactivated successfully.
Feb 01 14:51:09 compute-0 podman[88618]: 2026-02-01 14:51:09.628255783 +0000 UTC m=+0.888548895 container remove 4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:09 compute-0 systemd[1]: libpod-conmon-4d83e608cf78b10bca3c29ce819186aac9b6f30332d33e86987eb8dd446c7b01.scope: Deactivated successfully.
Feb 01 14:51:09 compute-0 sudo[88111]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 01 14:51:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:09 compute-0 sudo[88727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:51:09 compute-0 sudo[88727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:09 compute-0 sudo[88727]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:09 compute-0 sudo[88752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:09 compute-0 sudo[88752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:09 compute-0 sudo[88752]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:09 compute-0 sudo[88777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:51:09 compute-0 sudo[88777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:10 compute-0 podman[88846]: 2026-02-01 14:51:10.267517663 +0000 UTC m=+0.063913664 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:51:10 compute-0 podman[88846]: 2026-02-01 14:51:10.377620224 +0000 UTC m=+0.174016225 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 done with init, starting boot process
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 start_boot
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 01 14:51:10 compute-0 ceph-osd[88066]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:10 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 01 14:51:10 compute-0 ceph-mon[75179]: osd.1 [v2:192.168.122.100:6806/133289609,v1:192.168.122.100:6807/133289609] boot
Feb 01 14:51:10 compute-0 ceph-mon[75179]: osdmap e12: 3 total, 2 up, 3 in
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:10 compute-0 ceph-mon[75179]: pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 01 14:51:10 compute-0 ceph-mon[75179]: osdmap e13: 3 total, 2 up, 3 in
Feb 01 14:51:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: [devicehealth INFO root] creating main.db for devicehealth
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb 01 14:51:10 compute-0 ceph-mgr[75469]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 01 14:51:10 compute-0 sudo[89003]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Feb 01 14:51:10 compute-0 sudo[89003]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 01 14:51:10 compute-0 sudo[89003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Feb 01 14:51:10 compute-0 sudo[89003]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:51:10 compute-0 sudo[88777]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:11 compute-0 sudo[89009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:11 compute-0 sudo[89009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:11 compute-0 sudo[89009]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:11 compute-0 sudo[89034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- inventory --format=json-pretty --filter-for-batch
Feb 01 14:51:11 compute-0 sudo[89034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.382875654 +0000 UTC m=+0.073773666 container create 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:51:11 compute-0 systemd[1]: Started libpod-conmon-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope.
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.3520034 +0000 UTC m=+0.042901402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb 01 14:51:11 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb 01 14:51:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:11 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.693694759 +0000 UTC m=+0.384592791 container init 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:11 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb 01 14:51:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.703739277 +0000 UTC m=+0.394637289 container start 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:51:11 compute-0 loving_payne[89089]: 167 167
Feb 01 14:51:11 compute-0 systemd[1]: libpod-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope: Deactivated successfully.
Feb 01 14:51:11 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.718059891 +0000 UTC m=+0.408957983 container attach 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.718601327 +0000 UTC m=+0.409499339 container died 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 14:51:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:11 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:11 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6370ca7a6c29c748457577c23f9874162631477c8c0f6fc672ed44e36a59898-merged.mount: Deactivated successfully.
Feb 01 14:51:11 compute-0 podman[89072]: 2026-02-01 14:51:11.817698332 +0000 UTC m=+0.508596314 container remove 8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:11 compute-0 systemd[1]: libpod-conmon-8af13402f297d9338cfc57aa9be5f57079e3b35a1bcb612b228b296fdb2ce9f5.scope: Deactivated successfully.
Feb 01 14:51:11 compute-0 podman[89114]: 2026-02-01 14:51:11.975625409 +0000 UTC m=+0.056273728 container create 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.viosrg(active, since 54s)
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:11.942824037 +0000 UTC m=+0.023472426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:12 compute-0 systemd[1]: Started libpod-conmon-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope.
Feb 01 14:51:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:12.120135138 +0000 UTC m=+0.200783457 container init 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:12.129883577 +0000 UTC m=+0.210531896 container start 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:12.146988824 +0000 UTC m=+0.227637183 container attach 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]: [
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:     {
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "available": false,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "being_replaced": false,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "ceph_device_lvm": false,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "lsm_data": {},
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "lvs": [],
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "path": "/dev/sr0",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "rejected_reasons": [
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "Insufficient space (<5GB)",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "Has a FileSystem"
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         ],
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         "sys_api": {
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "actuators": null,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "device_nodes": [
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:                 "sr0"
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             ],
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "devname": "sr0",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "human_readable_size": "482.00 KB",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "id_bus": "ata",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "model": "QEMU DVD-ROM",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "nr_requests": "2",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "parent": "/dev/sr0",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "partitions": {},
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "path": "/dev/sr0",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "removable": "1",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "rev": "2.5+",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "ro": "0",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "rotational": "1",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "sas_address": "",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "sas_device_handle": "",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "scheduler_mode": "mq-deadline",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "sectors": 0,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "sectorsize": "2048",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "size": 493568.0,
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "support_discard": "2048",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "type": "disk",
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:             "vendor": "QEMU"
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:         }
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]:     }
Feb 01 14:51:12 compute-0 determined_chandrasekhar[89130]: ]
Feb 01 14:51:12 compute-0 systemd[1]: libpod-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope: Deactivated successfully.
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:12.713423238 +0000 UTC m=+0.794071587 container died 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: purged_snaps scrub starts
Feb 01 14:51:12 compute-0 ceph-mon[75179]: purged_snaps scrub ok
Feb 01 14:51:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: pgmap v31: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 01 14:51:12 compute-0 ceph-mon[75179]: osdmap e14: 3 total, 2 up, 3 in
Feb 01 14:51:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mgrmap e9: compute-0.viosrg(active, since 54s)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc20352ad425d010d0064e19a549d5a133d62041d25e7772df82ca56047c0fa8-merged.mount: Deactivated successfully.
Feb 01 14:51:12 compute-0 podman[89114]: 2026-02-01 14:51:12.872670394 +0000 UTC m=+0.953318733 container remove 7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 14:51:12 compute-0 systemd[1]: libpod-conmon-7cc763834e6cc656086422f7e5f8d48b5aea9c6f11f4aa96caa785c2f3287f83.scope: Deactivated successfully.
Feb 01 14:51:12 compute-0 sudo[89034]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 01 14:51:12 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:51:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:13 compute-0 sudo[89933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:13 compute-0 sudo[89933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:13 compute-0 sudo[89933]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:13 compute-0 sudo[89958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:51:13 compute-0 sudo[89958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.384917515 +0000 UTC m=+0.044418687 container create 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:13 compute-0 systemd[1]: Started libpod-conmon-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope.
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.359154692 +0000 UTC m=+0.018655854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.502417924 +0000 UTC m=+0.161919176 container init 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.510114182 +0000 UTC m=+0.169615364 container start 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:13 compute-0 wonderful_wiles[90011]: 167 167
Feb 01 14:51:13 compute-0 systemd[1]: libpod-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope: Deactivated successfully.
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.533278918 +0000 UTC m=+0.192780150 container attach 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.533792163 +0000 UTC m=+0.193293375 container died 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 14:51:13 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:13 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-aed99cf3cf07a4bb6817de6419fac5cf48aee237e69325b624ff8968be215f61-merged.mount: Deactivated successfully.
Feb 01 14:51:13 compute-0 podman[89995]: 2026-02-01 14:51:13.673586623 +0000 UTC m=+0.333087775 container remove 0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:51:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 01 14:51:13 compute-0 systemd[1]: libpod-conmon-0eca7c609261897e1ddc7dba8fd5c67a6ef7a5dd1c93494e7bae6b59f082a6e5.scope: Deactivated successfully.
Feb 01 14:51:13 compute-0 podman[90036]: 2026-02-01 14:51:13.821850703 +0000 UTC m=+0.053818375 container create 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 14:51:13 compute-0 systemd[1]: Started libpod-conmon-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope.
Feb 01 14:51:13 compute-0 podman[90036]: 2026-02-01 14:51:13.792227276 +0000 UTC m=+0.024195038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:13 compute-0 podman[90036]: 2026-02-01 14:51:13.911769026 +0000 UTC m=+0.143736748 container init 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:51:13 compute-0 podman[90036]: 2026-02-01 14:51:13.918157825 +0000 UTC m=+0.150125507 container start 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 14:51:13 compute-0 podman[90036]: 2026-02-01 14:51:13.921993899 +0000 UTC m=+0.153961581 container attach 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: Adjusting osd_memory_target on compute-0 to 43686k
Feb 01 14:51:13 compute-0 ceph-mon[75179]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.731 iops: 9403.069 elapsed_sec: 0.319
Feb 01 14:51:14 compute-0 ceph-osd[88066]: log_channel(cluster) log [WRN] : OSD bench result of 9403.069102 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 0 waiting for initial osdmap
Feb 01 14:51:14 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:14.022+0000 7fd7baaae640 -1 osd.2 0 waiting for initial osdmap
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 set_numa_affinity not setting numa affinity
Feb 01 14:51:14 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-osd-2[88062]: 2026-02-01T14:51:14.042+0000 7fd7b50a1640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 01 14:51:14 compute-0 ceph-osd[88066]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb 01 14:51:14 compute-0 boring_banach[90052]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:51:14 compute-0 boring_banach[90052]: --> All data devices are unavailable
Feb 01 14:51:14 compute-0 systemd[1]: libpod-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope: Deactivated successfully.
Feb 01 14:51:14 compute-0 podman[90036]: 2026-02-01 14:51:14.329279691 +0000 UTC m=+0.561247363 container died 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25b5cb575aabc9ef8741d2f8179ac42767d16bdafe2ea67899d1d2dec66828c-merged.mount: Deactivated successfully.
Feb 01 14:51:14 compute-0 podman[90036]: 2026-02-01 14:51:14.368464631 +0000 UTC m=+0.600432303 container remove 2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:14 compute-0 systemd[1]: libpod-conmon-2092b60e3ee00bdda2d4f58576b70fd06268ba8f092283bb6cc0cb4b021c567f.scope: Deactivated successfully.
Feb 01 14:51:14 compute-0 sudo[89958]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:14 compute-0 sudo[90082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:14 compute-0 sudo[90082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:14 compute-0 sudo[90082]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:14 compute-0 sudo[90107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:51:14 compute-0 sudo[90107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:14 compute-0 ceph-mgr[75469]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3752563045; not ready for session (expect reconnect)
Feb 01 14:51:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:14 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:14 compute-0 ceph-mgr[75469]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.744373894 +0000 UTC m=+0.045093577 container create 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:14 compute-0 systemd[1]: Started libpod-conmon-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope.
Feb 01 14:51:14 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.80941531 +0000 UTC m=+0.110135073 container init 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.716090246 +0000 UTC m=+0.016809939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.816727446 +0000 UTC m=+0.117447139 container start 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:14 compute-0 eloquent_blackwell[90160]: 167 167
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.820838458 +0000 UTC m=+0.121558161 container attach 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:51:14 compute-0 systemd[1]: libpod-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope: Deactivated successfully.
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.82294083 +0000 UTC m=+0.123660493 container died 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c238ac3520da29391a0c26a6fa92b4a980f23eed82ec594634cf18cd01b79a13-merged.mount: Deactivated successfully.
Feb 01 14:51:14 compute-0 podman[90144]: 2026-02-01 14:51:14.868115368 +0000 UTC m=+0.168835031 container remove 04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:14 compute-0 systemd[1]: libpod-conmon-04ad1cf3f26c3576e579f367e0cefb1cc0838b74226775709b592fafe46e250a.scope: Deactivated successfully.
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.012921716 +0000 UTC m=+0.053748172 container create 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:51:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb 01 14:51:15 compute-0 ceph-mon[75179]: pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 01 14:51:15 compute-0 ceph-mon[75179]: OSD bench result of 9403.069102 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 01 14:51:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Feb 01 14:51:15 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045] boot
Feb 01 14:51:15 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Feb 01 14:51:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 01 14:51:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:15 compute-0 ceph-osd[88066]: osd.2 15 state: booting -> active
Feb 01 14:51:15 compute-0 systemd[1]: Started libpod-conmon-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope.
Feb 01 14:51:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:14.988129102 +0000 UTC m=+0.028955628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.089807833 +0000 UTC m=+0.130634289 container init 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.094490692 +0000 UTC m=+0.135317118 container start 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.097341297 +0000 UTC m=+0.138167793 container attach 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:15 compute-0 cranky_albattani[90199]: {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     "0": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "devices": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "/dev/loop3"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             ],
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_name": "ceph_lv0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_size": "21470642176",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "name": "ceph_lv0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "tags": {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.crush_device_class": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.encrypted": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_id": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.vdo": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.with_tpm": "0"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             },
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "vg_name": "ceph_vg0"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         }
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     ],
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     "1": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "devices": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "/dev/loop4"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             ],
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_name": "ceph_lv1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_size": "21470642176",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "name": "ceph_lv1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "tags": {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.crush_device_class": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.encrypted": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_id": "1",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.vdo": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.with_tpm": "0"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             },
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "vg_name": "ceph_vg1"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         }
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     ],
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     "2": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "devices": [
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "/dev/loop5"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             ],
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_name": "ceph_lv2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_size": "21470642176",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "name": "ceph_lv2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "tags": {
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.crush_device_class": "",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.encrypted": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osd_id": "2",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.vdo": "0",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:                 "ceph.with_tpm": "0"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             },
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "type": "block",
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:             "vg_name": "ceph_vg2"
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:         }
Feb 01 14:51:15 compute-0 cranky_albattani[90199]:     ]
Feb 01 14:51:15 compute-0 cranky_albattani[90199]: }
Feb 01 14:51:15 compute-0 systemd[1]: libpod-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope: Deactivated successfully.
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.356980196 +0000 UTC m=+0.397806642 container died 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a175da107c4b889f71a213f598e4cd0e65e48516b81a9ac9e54d7d6081c51537-merged.mount: Deactivated successfully.
Feb 01 14:51:15 compute-0 podman[90183]: 2026-02-01 14:51:15.396849496 +0000 UTC m=+0.437675932 container remove 360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:15 compute-0 systemd[1]: libpod-conmon-360037d16c2465164273687e89fba1e43349b0442b9010e45a5ee5cbdc3996ac.scope: Deactivated successfully.
Feb 01 14:51:15 compute-0 sudo[90107]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:15 compute-0 sudo[90218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:15 compute-0 sudo[90218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:15 compute-0 sudo[90218]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:15 compute-0 sudo[90243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:15 compute-0 sudo[90243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.801042326 +0000 UTC m=+0.042188910 container create 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:15 compute-0 systemd[1]: Started libpod-conmon-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope.
Feb 01 14:51:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.866434163 +0000 UTC m=+0.107580787 container init 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.870486573 +0000 UTC m=+0.111633157 container start 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.874515902 +0000 UTC m=+0.115662536 container attach 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 01 14:51:15 compute-0 affectionate_varahamihira[90296]: 167 167
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.875526502 +0000 UTC m=+0.116673086 container died 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:51:15 compute-0 systemd[1]: libpod-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope: Deactivated successfully.
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.785150456 +0000 UTC m=+0.026297070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e104d42f97ed67a75a8a0a937e8b14849421d086e8e20ae31af9e4ce0cf3d080-merged.mount: Deactivated successfully.
Feb 01 14:51:15 compute-0 podman[90280]: 2026-02-01 14:51:15.905419918 +0000 UTC m=+0.146566502 container remove 53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:51:15 compute-0 systemd[1]: libpod-conmon-53b9974b133e23ef3cda0d9aa7ae75c4fbf630adef25832f525a2969a3404188.scope: Deactivated successfully.
Feb 01 14:51:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb 01 14:51:16 compute-0 ceph-mon[75179]: osd.2 [v2:192.168.122.100:6810/3752563045,v1:192.168.122.100:6811/3752563045] boot
Feb 01 14:51:16 compute-0 ceph-mon[75179]: osdmap e15: 3 total, 3 up, 3 in
Feb 01 14:51:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 01 14:51:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb 01 14:51:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.072549817 +0000 UTC m=+0.083574076 container create 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:51:16 compute-0 systemd[1]: Started libpod-conmon-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope.
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.045134285 +0000 UTC m=+0.056158614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:16 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.169757666 +0000 UTC m=+0.180781975 container init 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.177026531 +0000 UTC m=+0.188050770 container start 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.180520614 +0000 UTC m=+0.191544933 container attach 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:51:16 compute-0 lvm[90414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:16 compute-0 lvm[90414]: VG ceph_vg0 finished
Feb 01 14:51:16 compute-0 lvm[90417]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:16 compute-0 lvm[90417]: VG ceph_vg1 finished
Feb 01 14:51:16 compute-0 lvm[90419]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:16 compute-0 lvm[90419]: VG ceph_vg2 finished
Feb 01 14:51:16 compute-0 sweet_feistel[90338]: {}
Feb 01 14:51:16 compute-0 sudo[90445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-welphpfcswcmwvrqkncppaerfaioyqxz ; /usr/bin/python3'
Feb 01 14:51:16 compute-0 systemd[1]: libpod-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Deactivated successfully.
Feb 01 14:51:16 compute-0 systemd[1]: libpod-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Consumed 1.070s CPU time.
Feb 01 14:51:16 compute-0 podman[90321]: 2026-02-01 14:51:16.970420507 +0000 UTC m=+0.981444796 container died 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:51:16 compute-0 sudo[90445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-24123834e24e2a0bc12963d54af1d4438c34fffbe57eeb753d92d24b0b114c82-merged.mount: Deactivated successfully.
Feb 01 14:51:17 compute-0 podman[90321]: 2026-02-01 14:51:17.023931852 +0000 UTC m=+1.034956121 container remove 0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:17 compute-0 systemd[1]: libpod-conmon-0e4fff0ae4fb43a81911727f534fab2adf3bf25d2286c11768e2e085da6f12a7.scope: Deactivated successfully.
Feb 01 14:51:17 compute-0 ceph-mon[75179]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:17 compute-0 ceph-mon[75179]: osdmap e16: 3 total, 3 up, 3 in
Feb 01 14:51:17 compute-0 sudo[90243]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:17 compute-0 python3[90448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:17 compute-0 sudo[90460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:51:17 compute-0 sudo[90460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:17 compute-0 sudo[90460]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.221265696 +0000 UTC m=+0.048553459 container create 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:17 compute-0 systemd[1]: Started libpod-conmon-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope.
Feb 01 14:51:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.208538299 +0000 UTC m=+0.035826092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.328766329 +0000 UTC m=+0.156054202 container init 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.338669542 +0000 UTC m=+0.165957345 container start 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.342597658 +0000 UTC m=+0.169885511 container attach 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:51:17
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr']
Feb 01 14:51:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:51:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 01 14:51:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186673883' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:17 compute-0 flamboyant_chaum[90503]: 
Feb 01 14:51:17 compute-0 flamboyant_chaum[90503]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":77,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502886400,"bytes_avail":63909040128,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-02-01T14:49:58:117399+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-01T14:49:58.120892+0000","services":{}},"progress_events":{}}
Feb 01 14:51:17 compute-0 systemd[1]: libpod-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope: Deactivated successfully.
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.818354318 +0000 UTC m=+0.645642091 container died 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-33cb4f1010330b34ad95f20bac142cab83a4c7f65d7c0a809b33d3b635f976e7-merged.mount: Deactivated successfully.
Feb 01 14:51:17 compute-0 podman[90484]: 2026-02-01 14:51:17.857668352 +0000 UTC m=+0.684956125 container remove 7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8 (image=quay.io/ceph/ceph:v20, name=flamboyant_chaum, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:17 compute-0 systemd[1]: libpod-conmon-7168ff27b874f9377e9fd01b4a707f3f760d1fa23c6575368e2c038e4a00b7c8.scope: Deactivated successfully.
Feb 01 14:51:17 compute-0 sudo[90445]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:18 compute-0 ceph-mon[75179]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:18 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4186673883' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:18 compute-0 sudo[90563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gasjqjljtheapxnwpfiavuwfbdvogiuj ; /usr/bin/python3'
Feb 01 14:51:18 compute-0 sudo[90563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:18 compute-0 python3[90565]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:18 compute-0 podman[90566]: 2026-02-01 14:51:18.409288858 +0000 UTC m=+0.045400696 container create 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:51:18 compute-0 systemd[1]: Started libpod-conmon-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope.
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:18 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:18 compute-0 podman[90566]: 2026-02-01 14:51:18.386447361 +0000 UTC m=+0.022559249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:18 compute-0 podman[90566]: 2026-02-01 14:51:18.494630805 +0000 UTC m=+0.130742663 container init 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:18 compute-0 podman[90566]: 2026-02-01 14:51:18.499028476 +0000 UTC m=+0.135140314 container start 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:51:18 compute-0 podman[90566]: 2026-02-01 14:51:18.503248301 +0000 UTC m=+0.139360179 container attach 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:51:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb 01 14:51:19 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb 01 14:51:19 compute-0 stoic_pare[90582]: pool 'vms' created
Feb 01 14:51:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb 01 14:51:19 compute-0 systemd[1]: libpod-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope: Deactivated successfully.
Feb 01 14:51:19 compute-0 podman[90566]: 2026-02-01 14:51:19.158814525 +0000 UTC m=+0.794926393 container died 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d5497146173086566ccd20a8843ed910f98eb6c3a38fac887723a0d5928e1b-merged.mount: Deactivated successfully.
Feb 01 14:51:19 compute-0 podman[90566]: 2026-02-01 14:51:19.203584671 +0000 UTC m=+0.839696509 container remove 4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae (image=quay.io/ceph/ceph:v20, name=stoic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:51:19 compute-0 systemd[1]: libpod-conmon-4c52b31099f13bb6acf70d11f354fe1bb7d38983c22b2a1a3e5ceaf66b822cae.scope: Deactivated successfully.
Feb 01 14:51:19 compute-0 sudo[90563]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:19 compute-0 sudo[90644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gujvhtxenvkwadmmisudsmiplbhizigd ; /usr/bin/python3'
Feb 01 14:51:19 compute-0 sudo[90644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:19 compute-0 python3[90646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:19 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:19 compute-0 podman[90647]: 2026-02-01 14:51:19.519130226 +0000 UTC m=+0.045992343 container create a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 14:51:19 compute-0 systemd[1]: Started libpod-conmon-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope.
Feb 01 14:51:19 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:19 compute-0 podman[90647]: 2026-02-01 14:51:19.495815325 +0000 UTC m=+0.022677442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:19 compute-0 podman[90647]: 2026-02-01 14:51:19.598979381 +0000 UTC m=+0.125841508 container init a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:19 compute-0 podman[90647]: 2026-02-01 14:51:19.60640439 +0000 UTC m=+0.133266477 container start a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:51:19 compute-0 podman[90647]: 2026-02-01 14:51:19.609975656 +0000 UTC m=+0.136837773 container attach a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb 01 14:51:20 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/304218935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:20 compute-0 ceph-mon[75179]: osdmap e17: 3 total, 3 up, 3 in
Feb 01 14:51:20 compute-0 ceph-mon[75179]: pgmap v39: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:20 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb 01 14:51:20 compute-0 friendly_rubin[90662]: pool 'volumes' created
Feb 01 14:51:20 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb 01 14:51:20 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:20 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:20 compute-0 systemd[1]: libpod-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope: Deactivated successfully.
Feb 01 14:51:20 compute-0 podman[90647]: 2026-02-01 14:51:20.16422067 +0000 UTC m=+0.691082747 container died a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b207f6adef79aba507ca9e566ce1fc763c820eba44571638c868215dbd559e-merged.mount: Deactivated successfully.
Feb 01 14:51:20 compute-0 podman[90647]: 2026-02-01 14:51:20.192186288 +0000 UTC m=+0.719048375 container remove a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622 (image=quay.io/ceph/ceph:v20, name=friendly_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:20 compute-0 sudo[90644]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:20 compute-0 systemd[1]: libpod-conmon-a8771c339793f8f2615b10f89c50675565664de803256b1dd236d7db29c89622.scope: Deactivated successfully.
Feb 01 14:51:20 compute-0 sudo[90724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxifmruyboecvsvuvmuipkmsmiqzbcqg ; /usr/bin/python3'
Feb 01 14:51:20 compute-0 sudo[90724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:20 compute-0 python3[90726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:20 compute-0 podman[90727]: 2026-02-01 14:51:20.49039987 +0000 UTC m=+0.049465846 container create 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 14:51:20 compute-0 systemd[1]: Started libpod-conmon-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope.
Feb 01 14:51:20 compute-0 podman[90727]: 2026-02-01 14:51:20.464226285 +0000 UTC m=+0.023292321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:20 compute-0 podman[90727]: 2026-02-01 14:51:20.578125678 +0000 UTC m=+0.137191654 container init 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:51:20 compute-0 podman[90727]: 2026-02-01 14:51:20.585453545 +0000 UTC m=+0.144519551 container start 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:20 compute-0 podman[90727]: 2026-02-01 14:51:20.589173215 +0000 UTC m=+0.148239201 container attach 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:51:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb 01 14:51:21 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/803876311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:21 compute-0 ceph-mon[75179]: osdmap e18: 3 total, 3 up, 3 in
Feb 01 14:51:21 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb 01 14:51:21 compute-0 hungry_kowalevski[90741]: pool 'backups' created
Feb 01 14:51:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb 01 14:51:21 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:21 compute-0 systemd[1]: libpod-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope: Deactivated successfully.
Feb 01 14:51:21 compute-0 podman[90727]: 2026-02-01 14:51:21.183086312 +0000 UTC m=+0.742152318 container died 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-da76ee4adbc4a74f074a682762ce20522c1e2de735cf899d839b77e09996f969-merged.mount: Deactivated successfully.
Feb 01 14:51:21 compute-0 podman[90727]: 2026-02-01 14:51:21.220341916 +0000 UTC m=+0.779407882 container remove 5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00 (image=quay.io/ceph/ceph:v20, name=hungry_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:21 compute-0 systemd[1]: libpod-conmon-5e1516f550313ef7724434dca67420621fb3eba5af02882dcf620b83ed3d2d00.scope: Deactivated successfully.
Feb 01 14:51:21 compute-0 sudo[90724]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:21 compute-0 sudo[90803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqfwkpullsuoliwxyarcshedpecvkorn ; /usr/bin/python3'
Feb 01 14:51:21 compute-0 sudo[90803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:21 compute-0 python3[90805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:21 compute-0 podman[90806]: 2026-02-01 14:51:21.504510931 +0000 UTC m=+0.038018587 container create bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:21 compute-0 systemd[1]: Started libpod-conmon-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope.
Feb 01 14:51:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:21 compute-0 podman[90806]: 2026-02-01 14:51:21.578769661 +0000 UTC m=+0.112277337 container init bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:21 compute-0 podman[90806]: 2026-02-01 14:51:21.486103266 +0000 UTC m=+0.019610942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:21 compute-0 podman[90806]: 2026-02-01 14:51:21.582615544 +0000 UTC m=+0.116123200 container start bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 14:51:21 compute-0 podman[90806]: 2026-02-01 14:51:21.58582555 +0000 UTC m=+0.119333206 container attach bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:51:21 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb 01 14:51:22 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3631477585' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:22 compute-0 ceph-mon[75179]: osdmap e19: 3 total, 3 up, 3 in
Feb 01 14:51:22 compute-0 ceph-mon[75179]: pgmap v42: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:22 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb 01 14:51:22 compute-0 stoic_nobel[90821]: pool 'images' created
Feb 01 14:51:22 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb 01 14:51:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:22 compute-0 systemd[1]: libpod-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope: Deactivated successfully.
Feb 01 14:51:22 compute-0 podman[90806]: 2026-02-01 14:51:22.20849426 +0000 UTC m=+0.742001916 container died bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37e5be2672977be384a2ba2d12c7724a6e94f65477c63c4537669a02ee89d91-merged.mount: Deactivated successfully.
Feb 01 14:51:22 compute-0 podman[90806]: 2026-02-01 14:51:22.240200799 +0000 UTC m=+0.773708455 container remove bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b (image=quay.io/ceph/ceph:v20, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 01 14:51:22 compute-0 systemd[1]: libpod-conmon-bd672e18a5a73ec8100e13a0f1aac1d2d5e9024f7a02db25d7672f1d85f1576b.scope: Deactivated successfully.
Feb 01 14:51:22 compute-0 sudo[90803]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:22 compute-0 sudo[90884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eodguyugnlcpvdaopiyigqcpanqctmsi ; /usr/bin/python3'
Feb 01 14:51:22 compute-0 sudo[90884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:22 compute-0 python3[90886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:22 compute-0 podman[90887]: 2026-02-01 14:51:22.569089109 +0000 UTC m=+0.050942630 container create fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:51:22 compute-0 systemd[1]: Started libpod-conmon-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope.
Feb 01 14:51:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:22 compute-0 podman[90887]: 2026-02-01 14:51:22.546536111 +0000 UTC m=+0.028389672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:22 compute-0 podman[90887]: 2026-02-01 14:51:22.646237823 +0000 UTC m=+0.128091404 container init fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:22 compute-0 podman[90887]: 2026-02-01 14:51:22.650234532 +0000 UTC m=+0.132088013 container start fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:22 compute-0 podman[90887]: 2026-02-01 14:51:22.654479627 +0000 UTC m=+0.136333128 container attach fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 14:51:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb 01 14:51:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb 01 14:51:23 compute-0 agitated_mclean[90902]: pool 'cephfs.cephfs.meta' created
Feb 01 14:51:23 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:23 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb 01 14:51:23 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/323587100' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:23 compute-0 ceph-mon[75179]: osdmap e20: 3 total, 3 up, 3 in
Feb 01 14:51:23 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:23 compute-0 systemd[1]: libpod-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope: Deactivated successfully.
Feb 01 14:51:23 compute-0 podman[90887]: 2026-02-01 14:51:23.21141147 +0000 UTC m=+0.693264961 container died fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3bed56f6de90ad4ac6d375b69e25122501d1a930cb5b2c2a75246f56c870285-merged.mount: Deactivated successfully.
Feb 01 14:51:23 compute-0 podman[90887]: 2026-02-01 14:51:23.243882212 +0000 UTC m=+0.725735733 container remove fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018 (image=quay.io/ceph/ceph:v20, name=agitated_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:51:23 compute-0 sudo[90884]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:23 compute-0 systemd[1]: libpod-conmon-fd55b313e4d0f639e198bea8b527d204361d17a38b4ed303c4a75e849c09a018.scope: Deactivated successfully.
Feb 01 14:51:23 compute-0 sudo[90963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmcebmjdwjjkahqewuhxsusnmwmblok ; /usr/bin/python3'
Feb 01 14:51:23 compute-0 sudo[90963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:23 compute-0 python3[90965]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:23 compute-0 podman[90966]: 2026-02-01 14:51:23.556060527 +0000 UTC m=+0.066456489 container create 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:51:23 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:23 compute-0 systemd[1]: Started libpod-conmon-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope.
Feb 01 14:51:23 compute-0 podman[90966]: 2026-02-01 14:51:23.526704988 +0000 UTC m=+0.037101060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:23 compute-0 podman[90966]: 2026-02-01 14:51:23.647395122 +0000 UTC m=+0.157791104 container init 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:51:23 compute-0 podman[90966]: 2026-02-01 14:51:23.651354379 +0000 UTC m=+0.161750341 container start 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:23 compute-0 podman[90966]: 2026-02-01 14:51:23.654698778 +0000 UTC m=+0.165094770 container attach 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:51:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v45: 6 pgs: 3 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 01 14:51:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb 01 14:51:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb 01 14:51:24 compute-0 zen_grothendieck[90982]: pool 'cephfs.cephfs.data' created
Feb 01 14:51:24 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb 01 14:51:24 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:24 compute-0 systemd[1]: libpod-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope: Deactivated successfully.
Feb 01 14:51:24 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3835473991' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:24 compute-0 ceph-mon[75179]: osdmap e21: 3 total, 3 up, 3 in
Feb 01 14:51:24 compute-0 ceph-mon[75179]: pgmap v45: 6 pgs: 3 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:24 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 01 14:51:24 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252080328' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 01 14:51:24 compute-0 ceph-mon[75179]: osdmap e22: 3 total, 3 up, 3 in
Feb 01 14:51:24 compute-0 conmon[90982]: conmon 270a21087bdf27c7fc12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope/container/memory.events
Feb 01 14:51:24 compute-0 podman[91009]: 2026-02-01 14:51:24.258103468 +0000 UTC m=+0.022643192 container died 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4331c5c601c012f189a3b20c9ebb33cb49821f083f456698df96b91a5de8b3d0-merged.mount: Deactivated successfully.
Feb 01 14:51:24 compute-0 podman[91009]: 2026-02-01 14:51:24.294928809 +0000 UTC m=+0.059468503 container remove 270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e (image=quay.io/ceph/ceph:v20, name=zen_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:24 compute-0 systemd[1]: libpod-conmon-270a21087bdf27c7fc12f1a2e62e9c9d278505629b93eb8becfef60d5bff1f9e.scope: Deactivated successfully.
Feb 01 14:51:24 compute-0 sudo[90963]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:24 compute-0 sudo[91046]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emyiskorkovkcoegmgwayqlsjegogmzq ; /usr/bin/python3'
Feb 01 14:51:24 compute-0 sudo[91046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:24 compute-0 python3[91048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:24 compute-0 podman[91049]: 2026-02-01 14:51:24.680847786 +0000 UTC m=+0.044228070 container create 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:51:24 compute-0 systemd[1]: Started libpod-conmon-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope.
Feb 01 14:51:24 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:24 compute-0 podman[91049]: 2026-02-01 14:51:24.663490972 +0000 UTC m=+0.026871276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:24 compute-0 podman[91049]: 2026-02-01 14:51:24.773581593 +0000 UTC m=+0.136961957 container init 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:24 compute-0 podman[91049]: 2026-02-01 14:51:24.778160158 +0000 UTC m=+0.141540452 container start 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:51:24 compute-0 podman[91049]: 2026-02-01 14:51:24.781359013 +0000 UTC m=+0.144739337 container attach 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 14:51:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb 01 14:51:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 01 14:51:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb 01 14:51:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 01 14:51:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb 01 14:51:25 compute-0 hungry_babbage[91064]: enabled application 'rbd' on pool 'vms'
Feb 01 14:51:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb 01 14:51:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:25 compute-0 systemd[1]: libpod-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope: Deactivated successfully.
Feb 01 14:51:25 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 01 14:51:25 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/170303645' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 01 14:51:25 compute-0 ceph-mon[75179]: osdmap e23: 3 total, 3 up, 3 in
Feb 01 14:51:25 compute-0 podman[91090]: 2026-02-01 14:51:25.265289035 +0000 UTC m=+0.031929567 container died 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6e7ab86aab5440f39a151375c05e086034d166ab067197ac23e91aa08e4b76d-merged.mount: Deactivated successfully.
Feb 01 14:51:25 compute-0 podman[91090]: 2026-02-01 14:51:25.299640172 +0000 UTC m=+0.066280674 container remove 97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329 (image=quay.io/ceph/ceph:v20, name=hungry_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:51:25 compute-0 systemd[1]: libpod-conmon-97d02dcef1946752eabd66a6d4de340152d6c427c415786e83835d8df504b329.scope: Deactivated successfully.
Feb 01 14:51:25 compute-0 sudo[91046]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:25 compute-0 sudo[91128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvuvicpzyuthxsyvqlepgagaskqirtjy ; /usr/bin/python3'
Feb 01 14:51:25 compute-0 sudo[91128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:25 compute-0 python3[91130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:25 compute-0 podman[91131]: 2026-02-01 14:51:25.618527606 +0000 UTC m=+0.038319916 container create 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:51:25 compute-0 systemd[1]: Started libpod-conmon-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope.
Feb 01 14:51:25 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:25 compute-0 podman[91131]: 2026-02-01 14:51:25.683819539 +0000 UTC m=+0.103611859 container init 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:51:25 compute-0 podman[91131]: 2026-02-01 14:51:25.688342943 +0000 UTC m=+0.108135273 container start 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:51:25 compute-0 podman[91131]: 2026-02-01 14:51:25.691169487 +0000 UTC m=+0.110961817 container attach 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 14:51:25 compute-0 podman[91131]: 2026-02-01 14:51:25.599149012 +0000 UTC m=+0.018941392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v48: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb 01 14:51:26 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 01 14:51:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb 01 14:51:26 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 01 14:51:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb 01 14:51:26 compute-0 goofy_proskuriakova[91146]: enabled application 'rbd' on pool 'volumes'
Feb 01 14:51:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb 01 14:51:26 compute-0 systemd[1]: libpod-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope: Deactivated successfully.
Feb 01 14:51:26 compute-0 podman[91131]: 2026-02-01 14:51:26.219110022 +0000 UTC m=+0.638902342 container died 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 14:51:26 compute-0 ceph-mon[75179]: pgmap v48: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:26 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 01 14:51:26 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3126520532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 01 14:51:26 compute-0 ceph-mon[75179]: osdmap e24: 3 total, 3 up, 3 in
Feb 01 14:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-141e26b99d29bb15c50d0a6afc1f8f0661a1efe5fa1e3754e04a80f519039271-merged.mount: Deactivated successfully.
Feb 01 14:51:26 compute-0 podman[91131]: 2026-02-01 14:51:26.253365897 +0000 UTC m=+0.673158217 container remove 7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 14:51:26 compute-0 systemd[1]: libpod-conmon-7ab192a270a7504aea0225b6b4e5f9afdaea43f9749f60406ae27207da8bf16f.scope: Deactivated successfully.
Feb 01 14:51:26 compute-0 sudo[91128]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:26 compute-0 sudo[91206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlyifncyuwyiirzlxvxcdlqheijgiqjz ; /usr/bin/python3'
Feb 01 14:51:26 compute-0 sudo[91206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:26 compute-0 python3[91208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:26 compute-0 podman[91209]: 2026-02-01 14:51:26.531899485 +0000 UTC m=+0.043689354 container create aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:26 compute-0 systemd[1]: Started libpod-conmon-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope.
Feb 01 14:51:26 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:26 compute-0 podman[91209]: 2026-02-01 14:51:26.5978959 +0000 UTC m=+0.109685779 container init aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:26 compute-0 podman[91209]: 2026-02-01 14:51:26.5087575 +0000 UTC m=+0.020547449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:26 compute-0 podman[91209]: 2026-02-01 14:51:26.604592468 +0000 UTC m=+0.116382377 container start aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:26 compute-0 podman[91209]: 2026-02-01 14:51:26.609013249 +0000 UTC m=+0.120803128 container attach aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 14:51:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb 01 14:51:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 01 14:51:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb 01 14:51:27 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 01 14:51:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 01 14:51:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb 01 14:51:27 compute-0 gifted_faraday[91225]: enabled application 'rbd' on pool 'backups'
Feb 01 14:51:27 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb 01 14:51:27 compute-0 systemd[1]: libpod-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope: Deactivated successfully.
Feb 01 14:51:27 compute-0 podman[91209]: 2026-02-01 14:51:27.271652323 +0000 UTC m=+0.783442192 container died aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0d4ccb25f4e315d14e8b08854be04215980aedd0aa8790ec631d666b145a5eb-merged.mount: Deactivated successfully.
Feb 01 14:51:27 compute-0 podman[91209]: 2026-02-01 14:51:27.310551135 +0000 UTC m=+0.822341014 container remove aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04 (image=quay.io/ceph/ceph:v20, name=gifted_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:51:27 compute-0 systemd[1]: libpod-conmon-aa4f4e75487149aa70ba9ddc27996f9dec60538902d226ecc0a7f21d55945e04.scope: Deactivated successfully.
Feb 01 14:51:27 compute-0 sudo[91206]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:27 compute-0 sudo[91284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmaqtpjeyxjvpjjhcrmfnsqlnyzidrf ; /usr/bin/python3'
Feb 01 14:51:27 compute-0 sudo[91284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:27 compute-0 python3[91286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:27 compute-0 podman[91287]: 2026-02-01 14:51:27.592478854 +0000 UTC m=+0.032820513 container create 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:27 compute-0 systemd[1]: Started libpod-conmon-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope.
Feb 01 14:51:27 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:27 compute-0 podman[91287]: 2026-02-01 14:51:27.652865813 +0000 UTC m=+0.093207552 container init 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:27 compute-0 podman[91287]: 2026-02-01 14:51:27.657687995 +0000 UTC m=+0.098029674 container start 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:27 compute-0 podman[91287]: 2026-02-01 14:51:27.660772407 +0000 UTC m=+0.101114076 container attach 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:27 compute-0 podman[91287]: 2026-02-01 14:51:27.576746548 +0000 UTC m=+0.017088227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb 01 14:51:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 01 14:51:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb 01 14:51:28 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2657114908' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 01 14:51:28 compute-0 ceph-mon[75179]: osdmap e25: 3 total, 3 up, 3 in
Feb 01 14:51:28 compute-0 ceph-mon[75179]: pgmap v51: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:28 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 01 14:51:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 01 14:51:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb 01 14:51:28 compute-0 goofy_proskuriakova[91302]: enabled application 'rbd' on pool 'images'
Feb 01 14:51:28 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb 01 14:51:28 compute-0 systemd[1]: libpod-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope: Deactivated successfully.
Feb 01 14:51:28 compute-0 podman[91287]: 2026-02-01 14:51:28.285492667 +0000 UTC m=+0.725834326 container died 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e490f8634d4deb479040b402c26c0d04d58784c750dcc5752857ac1862fca9a6-merged.mount: Deactivated successfully.
Feb 01 14:51:28 compute-0 podman[91287]: 2026-02-01 14:51:28.320246206 +0000 UTC m=+0.760587875 container remove 1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a (image=quay.io/ceph/ceph:v20, name=goofy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:51:28 compute-0 systemd[1]: libpod-conmon-1a05a9ed670ec62d45b59ac137bef4f0ed48372b12dd2d7682e62129dcc97f5a.scope: Deactivated successfully.
Feb 01 14:51:28 compute-0 sudo[91284]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:28 compute-0 sudo[91361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnfeeuzknmtfdujaeumkqqbayjbajkfe ; /usr/bin/python3'
Feb 01 14:51:28 compute-0 sudo[91361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:28 compute-0 python3[91363]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:28 compute-0 podman[91364]: 2026-02-01 14:51:28.596419764 +0000 UTC m=+0.045727825 container create 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 01 14:51:28 compute-0 systemd[1]: Started libpod-conmon-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope.
Feb 01 14:51:28 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:28 compute-0 podman[91364]: 2026-02-01 14:51:28.655157464 +0000 UTC m=+0.104465625 container init 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:28 compute-0 podman[91364]: 2026-02-01 14:51:28.568334323 +0000 UTC m=+0.017642464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:28 compute-0 podman[91364]: 2026-02-01 14:51:28.665320725 +0000 UTC m=+0.114628776 container start 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:51:28 compute-0 podman[91364]: 2026-02-01 14:51:28.668281913 +0000 UTC m=+0.117589984 container attach 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 14:51:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb 01 14:51:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 01 14:51:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb 01 14:51:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 01 14:51:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb 01 14:51:29 compute-0 focused_liskov[91380]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb 01 14:51:29 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb 01 14:51:29 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/266627564' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 01 14:51:29 compute-0 ceph-mon[75179]: osdmap e26: 3 total, 3 up, 3 in
Feb 01 14:51:29 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 01 14:51:29 compute-0 systemd[1]: libpod-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope: Deactivated successfully.
Feb 01 14:51:29 compute-0 podman[91364]: 2026-02-01 14:51:29.305921346 +0000 UTC m=+0.755229437 container died 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-03f667c51d262806c66edd602b2aaa2ea5dc7017748ac30070591f3fd81866d5-merged.mount: Deactivated successfully.
Feb 01 14:51:29 compute-0 podman[91364]: 2026-02-01 14:51:29.348504638 +0000 UTC m=+0.797812689 container remove 293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d (image=quay.io/ceph/ceph:v20, name=focused_liskov, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:29 compute-0 systemd[1]: libpod-conmon-293d6b96cb01632d98522b4bd29e704698c113a5ca9de875b3760e3b33feee8d.scope: Deactivated successfully.
Feb 01 14:51:29 compute-0 sudo[91361]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:29 compute-0 sudo[91442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrbyaklxqyssomdqtmtbrmqfllxfcxj ; /usr/bin/python3'
Feb 01 14:51:29 compute-0 sudo[91442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:29 compute-0 python3[91444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:29 compute-0 podman[91445]: 2026-02-01 14:51:29.749285687 +0000 UTC m=+0.058376420 container create 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 14:51:29 compute-0 systemd[1]: Started libpod-conmon-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope.
Feb 01 14:51:29 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:29 compute-0 podman[91445]: 2026-02-01 14:51:29.722187444 +0000 UTC m=+0.031278247 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:29 compute-0 podman[91445]: 2026-02-01 14:51:29.838133498 +0000 UTC m=+0.147224241 container init 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 01 14:51:29 compute-0 podman[91445]: 2026-02-01 14:51:29.845116814 +0000 UTC m=+0.154207557 container start 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:29 compute-0 podman[91445]: 2026-02-01 14:51:29.848069332 +0000 UTC m=+0.157160065 container attach 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:51:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb 01 14:51:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 01 14:51:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb 01 14:51:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 01 14:51:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb 01 14:51:30 compute-0 affectionate_merkle[91461]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb 01 14:51:30 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2817273916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 01 14:51:30 compute-0 ceph-mon[75179]: osdmap e27: 3 total, 3 up, 3 in
Feb 01 14:51:30 compute-0 ceph-mon[75179]: pgmap v54: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:30 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 01 14:51:30 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb 01 14:51:30 compute-0 systemd[1]: libpod-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope: Deactivated successfully.
Feb 01 14:51:30 compute-0 podman[91486]: 2026-02-01 14:51:30.344891155 +0000 UTC m=+0.019435506 container died 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-deb56ee10afa71e2a96acb5b95e317e32267dfc4973951cb355c9cfd5cb2b56e-merged.mount: Deactivated successfully.
Feb 01 14:51:30 compute-0 podman[91486]: 2026-02-01 14:51:30.371750531 +0000 UTC m=+0.046294822 container remove 47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873 (image=quay.io/ceph/ceph:v20, name=affectionate_merkle, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 14:51:30 compute-0 systemd[1]: libpod-conmon-47c33e8372b259785f1b4dff05987ce49a7463a43ef2d874681f233aee50b873.scope: Deactivated successfully.
Feb 01 14:51:30 compute-0 sudo[91442]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:31 compute-0 python3[91576]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:51:31 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2177062579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 01 14:51:31 compute-0 ceph-mon[75179]: osdmap e28: 3 total, 3 up, 3 in
Feb 01 14:51:31 compute-0 python3[91647]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957490.942764-36514-17192024930168/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:51:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:32 compute-0 sudo[91747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhrkhdwpsgqnschggygqbwglatnqvwnr ; /usr/bin/python3'
Feb 01 14:51:32 compute-0 sudo[91747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:32 compute-0 python3[91749]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:51:32 compute-0 sudo[91747]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:32 compute-0 ceph-mon[75179]: pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:32 compute-0 sudo[91822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdakqcpepskbgghgeooddbvvsyiwkeoi ; /usr/bin/python3'
Feb 01 14:51:32 compute-0 sudo[91822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:32 compute-0 python3[91824]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957491.895641-36528-243667670197410/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e13ba4992094cac129dd8dc4109da05eb92e153b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:51:32 compute-0 sudo[91822]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:32 compute-0 sudo[91872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbfjwljrlktqzzkyqtgpketgqdyyhml ; /usr/bin/python3'
Feb 01 14:51:32 compute-0 sudo[91872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:32 compute-0 python3[91874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:32 compute-0 podman[91875]: 2026-02-01 14:51:32.986594197 +0000 UTC m=+0.053129424 container create a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:33 compute-0 systemd[1]: Started libpod-conmon-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope.
Feb 01 14:51:33 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:32.964269526 +0000 UTC m=+0.030804813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:33.078596762 +0000 UTC m=+0.145132059 container init a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:33.085211478 +0000 UTC m=+0.151746705 container start a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:33.088453974 +0000 UTC m=+0.154989291 container attach a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:51:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 01 14:51:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:51:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 01 14:51:33 compute-0 awesome_ellis[91890]: 
Feb 01 14:51:33 compute-0 awesome_ellis[91890]: [global]
Feb 01 14:51:33 compute-0 awesome_ellis[91890]:         fsid = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:51:33 compute-0 awesome_ellis[91890]:         mon_host = 192.168.122.100
Feb 01 14:51:33 compute-0 awesome_ellis[91890]:         rgw_keystone_api_version = 3
Feb 01 14:51:33 compute-0 systemd[1]: libpod-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope: Deactivated successfully.
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:33.502542967 +0000 UTC m=+0.569078214 container died a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-752aa02ed622f77d64719cf7ab8c74238745467fc7f8f04240a000c448df54f5-merged.mount: Deactivated successfully.
Feb 01 14:51:33 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 01 14:51:33 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/842859654' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 01 14:51:33 compute-0 sudo[91915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:33 compute-0 sudo[91915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:33 compute-0 sudo[91915]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:33 compute-0 podman[91875]: 2026-02-01 14:51:33.547598951 +0000 UTC m=+0.614134168 container remove a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594 (image=quay.io/ceph/ceph:v20, name=awesome_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:51:33 compute-0 systemd[1]: libpod-conmon-a9abb1c9ca3707fc0d44553a8284f86c257ab5228e20fcb7303ed062c1e9b594.scope: Deactivated successfully.
Feb 01 14:51:33 compute-0 sudo[91872]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:33 compute-0 sudo[91952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:51:33 compute-0 sudo[91952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:33 compute-0 sudo[92000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkkyozybcjocnzhwyingoccpvytfeoa ; /usr/bin/python3'
Feb 01 14:51:33 compute-0 sudo[92000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:33 compute-0 python3[92002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:33 compute-0 podman[92034]: 2026-02-01 14:51:33.929381478 +0000 UTC m=+0.049118716 container create b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 14:51:33 compute-0 systemd[1]: Started libpod-conmon-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope.
Feb 01 14:51:33 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:33 compute-0 podman[92034]: 2026-02-01 14:51:33.901551314 +0000 UTC m=+0.021288602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:34 compute-0 podman[92034]: 2026-02-01 14:51:34.022679861 +0000 UTC m=+0.142417099 container init b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:51:34 compute-0 podman[92034]: 2026-02-01 14:51:34.026988388 +0000 UTC m=+0.146725586 container start b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:34 compute-0 podman[92034]: 2026-02-01 14:51:34.030336088 +0000 UTC m=+0.150073336 container attach b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:34 compute-0 podman[92064]: 2026-02-01 14:51:34.03817355 +0000 UTC m=+0.073651953 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:34 compute-0 podman[92064]: 2026-02-01 14:51:34.148636461 +0000 UTC m=+0.184114844 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:34 compute-0 ceph-mon[75179]: pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb 01 14:51:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2639964740' entity='client.admin' 
Feb 01 14:51:34 compute-0 great_dhawan[92071]: set ssl_option
Feb 01 14:51:34 compute-0 systemd[1]: libpod-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope: Deactivated successfully.
Feb 01 14:51:34 compute-0 podman[92034]: 2026-02-01 14:51:34.590652082 +0000 UTC m=+0.710389290 container died b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-88c849922dae760c44e40791033daf2dc149a84b806044f4c1c56c3980597952-merged.mount: Deactivated successfully.
Feb 01 14:51:34 compute-0 podman[92034]: 2026-02-01 14:51:34.623744032 +0000 UTC m=+0.743481230 container remove b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b (image=quay.io/ceph/ceph:v20, name=great_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:34 compute-0 systemd[1]: libpod-conmon-b7c5b08558a3b3289444e538f61d4bf680c5440ecc3a0ce08914f56a83abf30b.scope: Deactivated successfully.
Feb 01 14:51:34 compute-0 sudo[92000]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:34 compute-0 sudo[91952]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:34 compute-0 sudo[92247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:34 compute-0 sudo[92247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:34 compute-0 sudo[92247]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:34 compute-0 sudo[92299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpomghysxcpmmqtxnlutjxjxuwvagqcc ; /usr/bin/python3'
Feb 01 14:51:34 compute-0 sudo[92299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:34 compute-0 sudo[92292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:51:34 compute-0 sudo[92292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:34 compute-0 python3[92314]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:34 compute-0 podman[92323]: 2026-02-01 14:51:34.94987711 +0000 UTC m=+0.038677256 container create c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 14:51:34 compute-0 systemd[1]: Started libpod-conmon-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope.
Feb 01 14:51:34 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:35.018363919 +0000 UTC m=+0.107164075 container init c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:35.024290514 +0000 UTC m=+0.113090670 container start c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:35.028152138 +0000 UTC m=+0.116952324 container attach c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:34.932367872 +0000 UTC m=+0.021168058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:35 compute-0 sudo[92292]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:35 compute-0 sudo[92393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:35 compute-0 sudo[92393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:35 compute-0 sudo[92393]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:35 compute-0 sudo[92418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:51:35 compute-0 sudo[92418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:35 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:35 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 01 14:51:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 pensive_darwin[92341]: Scheduled rgw.rgw update...
Feb 01 14:51:35 compute-0 systemd[1]: libpod-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope: Deactivated successfully.
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:35.485207923 +0000 UTC m=+0.574008139 container died c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-732431039b2f41cf916124848bb32255847b7cb1366a51c3c4c613be1969a98a-merged.mount: Deactivated successfully.
Feb 01 14:51:35 compute-0 podman[92323]: 2026-02-01 14:51:35.522336772 +0000 UTC m=+0.611136948 container remove c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2 (image=quay.io/ceph/ceph:v20, name=pensive_darwin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:35 compute-0 sudo[92299]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:35 compute-0 systemd[1]: libpod-conmon-c05ffbb854ac80feeb168d833fd02875d02be94a27c48a18f6d2a8cc6bbe4ed2.scope: Deactivated successfully.
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2639964740' entity='client.admin' 
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.581953428 +0000 UTC m=+0.038601224 container create 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:51:35 compute-0 systemd[1]: Started libpod-conmon-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope.
Feb 01 14:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.632845615 +0000 UTC m=+0.089493431 container init 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.637497503 +0000 UTC m=+0.094145299 container start 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 14:51:35 compute-0 agitated_antonelli[92488]: 167 167
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.640568634 +0000 UTC m=+0.097216470 container attach 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 14:51:35 compute-0 systemd[1]: libpod-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope: Deactivated successfully.
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.64383065 +0000 UTC m=+0.100478466 container died 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.559477842 +0000 UTC m=+0.016125658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-023880fe4a589d4bf6962a02dcab0f06085b284e71c6dfcce5401ec7cc2617a3-merged.mount: Deactivated successfully.
Feb 01 14:51:35 compute-0 podman[92471]: 2026-02-01 14:51:35.682850216 +0000 UTC m=+0.139498032 container remove 938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:35 compute-0 systemd[1]: libpod-conmon-938e183aa91cf6ba539aa1ddf955915127979c2634170ea3e1179c1abfbcc3fa.scope: Deactivated successfully.
Feb 01 14:51:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:35 compute-0 podman[92511]: 2026-02-01 14:51:35.823625585 +0000 UTC m=+0.061357898 container create d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 14:51:35 compute-0 podman[92511]: 2026-02-01 14:51:35.797017657 +0000 UTC m=+0.034750020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:35 compute-0 systemd[1]: Started libpod-conmon-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope.
Feb 01 14:51:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:35 compute-0 podman[92511]: 2026-02-01 14:51:35.949230215 +0000 UTC m=+0.186962578 container init d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:35 compute-0 podman[92511]: 2026-02-01 14:51:35.963805126 +0000 UTC m=+0.201537449 container start d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 14:51:35 compute-0 podman[92511]: 2026-02-01 14:51:35.968266518 +0000 UTC m=+0.205998841 container attach d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 01 14:51:36 compute-0 python3[92610]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:51:36 compute-0 confident_dirac[92527]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:51:36 compute-0 confident_dirac[92527]: --> All data devices are unavailable
Feb 01 14:51:36 compute-0 systemd[1]: libpod-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope: Deactivated successfully.
Feb 01 14:51:36 compute-0 podman[92511]: 2026-02-01 14:51:36.465144843 +0000 UTC m=+0.702877156 container died d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a545440b575b242e28a14340f37e6a79dd59f7c65e2d2f2a6b295d404480ee-merged.mount: Deactivated successfully.
Feb 01 14:51:36 compute-0 podman[92511]: 2026-02-01 14:51:36.517791942 +0000 UTC m=+0.755524265 container remove d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 01 14:51:36 compute-0 systemd[1]: libpod-conmon-d1f119634cb8625b3d70175430d8acda76f2286858e3879d0cc027c8b2b568d2.scope: Deactivated successfully.
Feb 01 14:51:36 compute-0 sudo[92418]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:36 compute-0 ceph-mon[75179]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:36 compute-0 ceph-mon[75179]: Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:36 compute-0 ceph-mon[75179]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:36 compute-0 sudo[92706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:36 compute-0 sudo[92706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:36 compute-0 sudo[92706]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:36 compute-0 python3[92700]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957496.0966663-36569-87450757092027/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:51:36 compute-0 sudo[92731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:51:36 compute-0 sudo[92731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:36 compute-0 sudo[92817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxcntctxeratgbsgyqpqlcotzlsrlvu ; /usr/bin/python3'
Feb 01 14:51:36 compute-0 sudo[92817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:36 compute-0 podman[92812]: 2026-02-01 14:51:36.953604649 +0000 UTC m=+0.047103206 container create 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:51:36 compute-0 systemd[1]: Started libpod-conmon-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope.
Feb 01 14:51:37 compute-0 python3[92824]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:36.938011897 +0000 UTC m=+0.031510494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:37.05529096 +0000 UTC m=+0.148789517 container init 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:37.062183504 +0000 UTC m=+0.155682061 container start 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:37 compute-0 nice_margulis[92834]: 167 167
Feb 01 14:51:37 compute-0 systemd[1]: libpod-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:37.065330388 +0000 UTC m=+0.158828945 container attach 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:37.065527833 +0000 UTC m=+0.159026390 container died 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.083729332 +0000 UTC m=+0.044506749 container create 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45aed60b961da83f98c0d514c260e4349ad08b57e6398ef538c22b9ae9cd4f9-merged.mount: Deactivated successfully.
Feb 01 14:51:37 compute-0 podman[92812]: 2026-02-01 14:51:37.105485217 +0000 UTC m=+0.198983814 container remove 286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:37 compute-0 systemd[1]: libpod-conmon-286e559f489c6c5945df25b828ed4c386624b41132ab58a8dd4b42ebc079609c.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 systemd[1]: Started libpod-conmon-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope.
Feb 01 14:51:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.062558155 +0000 UTC m=+0.023335612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.173359937 +0000 UTC m=+0.134137444 container init 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.179119987 +0000 UTC m=+0.139897404 container start 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.182045404 +0000 UTC m=+0.142822861 container attach 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.247735859 +0000 UTC m=+0.049973391 container create 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:37 compute-0 systemd[1]: Started libpod-conmon-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope.
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.221801941 +0000 UTC m=+0.024039523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.356443579 +0000 UTC m=+0.158681121 container init 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.364380264 +0000 UTC m=+0.166617766 container start 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.36797311 +0000 UTC m=+0.170210712 container attach 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 01 14:51:37 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0[75175]: 2026-02-01T14:51:37.584+0000 7f813d74e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e2 new map
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-02-01T14:51:37:585930+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-01T14:51:37.585458+0000
                                           modified        2026-02-01T14:51:37.585459+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 01 14:51:37 compute-0 ceph-mon[75179]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 01 14:51:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 01 14:51:37 compute-0 ceph-mon[75179]: osdmap e29: 3 total, 3 up, 3 in
Feb 01 14:51:37 compute-0 ceph-mon[75179]: fsmap cephfs:0
Feb 01 14:51:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 01 14:51:37 compute-0 pensive_golick[92912]: {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     "0": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "devices": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "/dev/loop3"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             ],
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_name": "ceph_lv0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_size": "21470642176",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "name": "ceph_lv0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "tags": {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.crush_device_class": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.encrypted": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_id": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.vdo": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.with_tpm": "0"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             },
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "vg_name": "ceph_vg0"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         }
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     ],
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     "1": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "devices": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "/dev/loop4"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             ],
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_name": "ceph_lv1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_size": "21470642176",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "name": "ceph_lv1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "tags": {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.crush_device_class": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.encrypted": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_id": "1",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.vdo": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.with_tpm": "0"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             },
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "vg_name": "ceph_vg1"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         }
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     ],
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     "2": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "devices": [
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "/dev/loop5"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             ],
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_name": "ceph_lv2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_size": "21470642176",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "name": "ceph_lv2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "tags": {
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.crush_device_class": "",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.encrypted": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osd_id": "2",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.vdo": "0",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:                 "ceph.with_tpm": "0"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             },
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "type": "block",
Feb 01 14:51:37 compute-0 pensive_golick[92912]:             "vg_name": "ceph_vg2"
Feb 01 14:51:37 compute-0 pensive_golick[92912]:         }
Feb 01 14:51:37 compute-0 pensive_golick[92912]:     ]
Feb 01 14:51:37 compute-0 pensive_golick[92912]: }
Feb 01 14:51:37 compute-0 systemd[1]: libpod-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.626776535 +0000 UTC m=+0.587553962 container died 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 14:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8f30107e15c369db2b82f671cde88e2184dd8b3749607bfdb91eaa275d776f-merged.mount: Deactivated successfully.
Feb 01 14:51:37 compute-0 systemd[1]: libpod-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 podman[92837]: 2026-02-01 14:51:37.657309289 +0000 UTC m=+0.618086706 container remove 72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2 (image=quay.io/ceph/ceph:v20, name=kind_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.659662538 +0000 UTC m=+0.461900060 container died 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:51:37 compute-0 sudo[92817]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:37 compute-0 systemd[1]: libpod-conmon-72d5f1a0e3181520a07a134d37456437dec661bf6006200c8699d0b793d1a7e2.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:37 compute-0 podman[92876]: 2026-02-01 14:51:37.709468633 +0000 UTC m=+0.511706165 container remove 7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:37 compute-0 systemd[1]: libpod-conmon-7b5c86720d53d07181f8284aed7d3f0cb6436054552f9755455a89650fae60ce.scope: Deactivated successfully.
Feb 01 14:51:37 compute-0 sudo[92731]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:37 compute-0 sudo[92994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lngdxsbmbigzjzcfwafgxzisdbknzccu ; /usr/bin/python3'
Feb 01 14:51:37 compute-0 sudo[92994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:37 compute-0 sudo[92954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:37 compute-0 sudo[92954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:37 compute-0 sudo[92954]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:37 compute-0 sudo[93001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:37 compute-0 sudo[93001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfbaa9ad974d741bf65b021d1a031b17226a16f74239a4996ecd89f6758fd5a4-merged.mount: Deactivated successfully.
Feb 01 14:51:37 compute-0 python3[92999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.03521659 +0000 UTC m=+0.049672372 container create e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 14:51:38 compute-0 systemd[1]: Started libpod-conmon-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope.
Feb 01 14:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.015743954 +0000 UTC m=+0.030199836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.11318983 +0000 UTC m=+0.127645572 container init e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.118087665 +0000 UTC m=+0.132543447 container start e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.121433834 +0000 UTC m=+0.135889596 container attach e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.16350325 +0000 UTC m=+0.064253124 container create a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 01 14:51:38 compute-0 systemd[1]: Started libpod-conmon-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope.
Feb 01 14:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.226046112 +0000 UTC m=+0.126796026 container init a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.13683129 +0000 UTC m=+0.037581244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.23137884 +0000 UTC m=+0.132128714 container start a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 14:51:38 compute-0 bold_bardeen[93074]: 167 167
Feb 01 14:51:38 compute-0 systemd[1]: libpod-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope: Deactivated successfully.
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.234283956 +0000 UTC m=+0.135033880 container attach a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.2347572 +0000 UTC m=+0.135507084 container died a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e9321d08d1c2ca3f4ddf1bb2d0cec2c64fbe9face3f8bf354aae1f8da9f8f4-merged.mount: Deactivated successfully.
Feb 01 14:51:38 compute-0 podman[93057]: 2026-02-01 14:51:38.268822999 +0000 UTC m=+0.169572873 container remove a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:38 compute-0 systemd[1]: libpod-conmon-a1a85b448d3370406b4848571c1e59d2b019bd6f19241dfbd3400597a0819b4d.scope: Deactivated successfully.
Feb 01 14:51:38 compute-0 podman[93118]: 2026-02-01 14:51:38.425676074 +0000 UTC m=+0.040822530 container create 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:51:38 compute-0 systemd[1]: Started libpod-conmon-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope.
Feb 01 14:51:38 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:38 compute-0 podman[93118]: 2026-02-01 14:51:38.498035737 +0000 UTC m=+0.113182163 container init 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 01 14:51:38 compute-0 podman[93118]: 2026-02-01 14:51:38.403086065 +0000 UTC m=+0.018232511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:38 compute-0 podman[93118]: 2026-02-01 14:51:38.504947881 +0000 UTC m=+0.120094327 container start 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:38 compute-0 ceph-mgr[75469]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:38 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:38 compute-0 podman[93118]: 2026-02-01 14:51:38.509095934 +0000 UTC m=+0.124242370 container attach 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 14:51:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 01 14:51:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:38 compute-0 cool_thompson[93052]: Scheduled mds.cephfs update...
Feb 01 14:51:38 compute-0 systemd[1]: libpod-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope: Deactivated successfully.
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.52888144 +0000 UTC m=+0.543337242 container died e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 01 14:51:38 compute-0 podman[93026]: 2026-02-01 14:51:38.570542764 +0000 UTC m=+0.584998516 container remove e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24 (image=quay.io/ceph/ceph:v20, name=cool_thompson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:38 compute-0 systemd[1]: libpod-conmon-e3a8ce8683ad8696c098e8d0a3962ee14094114d403afd91f671f9c361445c24.scope: Deactivated successfully.
Feb 01 14:51:38 compute-0 sudo[92994]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:38 compute-0 ceph-mon[75179]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:38 compute-0 ceph-mon[75179]: Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:38 compute-0 ceph-mon[75179]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-35fe62f2f994ae68bdbe3a1643b3429133b37ca57a2ec3b6d272fc2dcbb76727-merged.mount: Deactivated successfully.
Feb 01 14:51:39 compute-0 lvm[93228]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:39 compute-0 lvm[93229]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:39 compute-0 lvm[93228]: VG ceph_vg0 finished
Feb 01 14:51:39 compute-0 lvm[93229]: VG ceph_vg1 finished
Feb 01 14:51:39 compute-0 lvm[93231]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:39 compute-0 lvm[93231]: VG ceph_vg2 finished
Feb 01 14:51:39 compute-0 vigilant_babbage[93135]: {}
Feb 01 14:51:39 compute-0 systemd[1]: libpod-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Deactivated successfully.
Feb 01 14:51:39 compute-0 systemd[1]: libpod-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Consumed 1.044s CPU time.
Feb 01 14:51:39 compute-0 podman[93118]: 2026-02-01 14:51:39.309767535 +0000 UTC m=+0.924913951 container died 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:39 compute-0 sudo[93310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffqomwidlocioftcsdgooinhuwegcauz ; /usr/bin/python3'
Feb 01 14:51:39 compute-0 sudo[93310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c00f7ef882c34806d11ae7b1d0514e7ef6b69fd86e3df161083d1cde127a67d7-merged.mount: Deactivated successfully.
Feb 01 14:51:39 compute-0 podman[93118]: 2026-02-01 14:51:39.585101819 +0000 UTC m=+1.200248225 container remove 888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:51:39 compute-0 python3[93317]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 01 14:51:39 compute-0 systemd[1]: libpod-conmon-888dd2cdb5e2aa7af1be1149908eafe208cd05d97ebbcb431d4ca7e00667a9fd.scope: Deactivated successfully.
Feb 01 14:51:39 compute-0 sudo[93310]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:39 compute-0 sudo[93001]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:39 compute-0 ceph-mon[75179]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 14:51:39 compute-0 ceph-mon[75179]: Saving service mds.cephfs spec with placement compute-0
Feb 01 14:51:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:39 compute-0 sudo[93341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:51:39 compute-0 sudo[93341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:39 compute-0 sudo[93341]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:39 compute-0 sudo[93396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:39 compute-0 sudo[93396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:39 compute-0 sudo[93396]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:39 compute-0 sudo[93442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnixmfwbxnhetwugbkivwffisolojxm ; /usr/bin/python3'
Feb 01 14:51:39 compute-0 sudo[93442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:39 compute-0 sudo[93447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:51:39 compute-0 sudo[93447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:39 compute-0 python3[93446]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957499.208748-36618-277888170047934/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=9e80b5c3ad70771b2808c3ea209191214d8953f2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:51:39 compute-0 sudo[93442]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:40 compute-0 podman[93541]: 2026-02-01 14:51:40.263938733 +0000 UTC m=+0.078203887 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:40 compute-0 sudo[93584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamusqvnpzstnhusddtrseamdswqfhwj ; /usr/bin/python3'
Feb 01 14:51:40 compute-0 sudo[93584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:40 compute-0 podman[93541]: 2026-02-01 14:51:40.359501203 +0000 UTC m=+0.173766337 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:40 compute-0 python3[93586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:40 compute-0 podman[93613]: 2026-02-01 14:51:40.518505912 +0000 UTC m=+0.045318794 container create d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:40 compute-0 systemd[1]: Started libpod-conmon-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope.
Feb 01 14:51:40 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:40 compute-0 podman[93613]: 2026-02-01 14:51:40.499819158 +0000 UTC m=+0.026632070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:40 compute-0 podman[93613]: 2026-02-01 14:51:40.608422554 +0000 UTC m=+0.135235586 container init d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:40 compute-0 podman[93613]: 2026-02-01 14:51:40.614470403 +0000 UTC m=+0.141283275 container start d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:51:40 compute-0 podman[93613]: 2026-02-01 14:51:40.617782482 +0000 UTC m=+0.144595424 container attach d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:40 compute-0 ceph-mon[75179]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:40 compute-0 sudo[93447]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:51:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:51:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:51:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:41 compute-0 sudo[93750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:41 compute-0 sudo[93750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:41 compute-0 sudo[93750]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb 01 14:51:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 01 14:51:41 compute-0 systemd[1]: libpod-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope: Deactivated successfully.
Feb 01 14:51:41 compute-0 podman[93613]: 2026-02-01 14:51:41.144640314 +0000 UTC m=+0.671453196 container died d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:41 compute-0 sudo[93775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:51:41 compute-0 sudo[93775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fef96d11df7d1d9982a99c5023b183d696e278b6c2bd236093b5807676574a3-merged.mount: Deactivated successfully.
Feb 01 14:51:41 compute-0 podman[93613]: 2026-02-01 14:51:41.180678022 +0000 UTC m=+0.707490904 container remove d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5 (image=quay.io/ceph/ceph:v20, name=sweet_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 14:51:41 compute-0 systemd[1]: libpod-conmon-d825f5b38a3765b164e5ba6697c9825d4cdc07d3b8990506d89b56d8e1de9ab5.scope: Deactivated successfully.
Feb 01 14:51:41 compute-0 sudo[93584]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.462049784 +0000 UTC m=+0.065034967 container create de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 14:51:41 compute-0 systemd[1]: Started libpod-conmon-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope.
Feb 01 14:51:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.433406386 +0000 UTC m=+0.036391609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.540826767 +0000 UTC m=+0.143811930 container init de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.549284918 +0000 UTC m=+0.152270091 container start de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.553901445 +0000 UTC m=+0.156886608 container attach de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:41 compute-0 objective_bassi[93843]: 167 167
Feb 01 14:51:41 compute-0 systemd[1]: libpod-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope: Deactivated successfully.
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.557226693 +0000 UTC m=+0.160211856 container died de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d6fbd0a98567dbc88615da73ba550ce8f812b67b11c9df08504d5028bff6ec-merged.mount: Deactivated successfully.
Feb 01 14:51:41 compute-0 podman[93826]: 2026-02-01 14:51:41.592886339 +0000 UTC m=+0.195871492 container remove de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 14:51:41 compute-0 systemd[1]: libpod-conmon-de147f3e13a5624c84871188eded8616b75a3df58e1b9469712c1f54a6c78816.scope: Deactivated successfully.
Feb 01 14:51:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:41 compute-0 sudo[93903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmrjmjuhadembmeniqceoxshiwavlkgv ; /usr/bin/python3'
Feb 01 14:51:41 compute-0 sudo[93903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:41 compute-0 podman[93866]: 2026-02-01 14:51:41.790225123 +0000 UTC m=+0.064045268 container create 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:51:41 compute-0 systemd[1]: Started libpod-conmon-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope.
Feb 01 14:51:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:41 compute-0 podman[93866]: 2026-02-01 14:51:41.767869811 +0000 UTC m=+0.041689986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:41 compute-0 podman[93866]: 2026-02-01 14:51:41.904995512 +0000 UTC m=+0.178815687 container init 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:51:41 compute-0 podman[93866]: 2026-02-01 14:51:41.912642578 +0000 UTC m=+0.186462713 container start 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:41 compute-0 podman[93866]: 2026-02-01 14:51:41.916531144 +0000 UTC m=+0.190351309 container attach 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 14:51:41 compute-0 python3[93905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 01 14:51:41 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/437174073' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.057510309 +0000 UTC m=+0.085603016 container create bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:42 compute-0 systemd[1]: Started libpod-conmon-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope.
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.013216097 +0000 UTC m=+0.041308864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.152650546 +0000 UTC m=+0.180743263 container init bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.162748455 +0000 UTC m=+0.190841122 container start bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.165851447 +0000 UTC m=+0.193944124 container attach bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:51:42 compute-0 gallant_knuth[93908]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:51:42 compute-0 gallant_knuth[93908]: --> All data devices are unavailable
Feb 01 14:51:42 compute-0 systemd[1]: libpod-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope: Deactivated successfully.
Feb 01 14:51:42 compute-0 conmon[93908]: conmon 7dd015145bfa3b08c73e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope/container/memory.events
Feb 01 14:51:42 compute-0 podman[93866]: 2026-02-01 14:51:42.456230206 +0000 UTC m=+0.730050341 container died 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 14:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaeea464b81460ef3f1a460110c99489bf36ad9d72ef90c13a603743d6210b88-merged.mount: Deactivated successfully.
Feb 01 14:51:42 compute-0 podman[93866]: 2026-02-01 14:51:42.507947977 +0000 UTC m=+0.781768132 container remove 7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:51:42 compute-0 systemd[1]: libpod-conmon-7dd015145bfa3b08c73eeea19c3729dd5091475095ccb39cbe5ae4ea859b5656.scope: Deactivated successfully.
Feb 01 14:51:42 compute-0 sudo[93775]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:42 compute-0 sudo[93979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:42 compute-0 sudo[93979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:42 compute-0 sudo[93979]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:42 compute-0 sudo[94004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:51:42 compute-0 sudo[94004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 01 14:51:42 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690848245' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:42 compute-0 fervent_bardeen[93932]: 
Feb 01 14:51:42 compute-0 fervent_bardeen[93932]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":102,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83808256,"bytes_avail":64328118272,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-01T14:51:37:585930+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-01T14:51:19.699816+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 01 14:51:42 compute-0 systemd[1]: libpod-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope: Deactivated successfully.
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.755916901 +0000 UTC m=+0.784009568 container died bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ff9d7db27bb552feff2d397ff0875b5086ba12ff552328973a377796ca71e3-merged.mount: Deactivated successfully.
Feb 01 14:51:42 compute-0 podman[93914]: 2026-02-01 14:51:42.808520549 +0000 UTC m=+0.836613246 container remove bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab (image=quay.io/ceph/ceph:v20, name=fervent_bardeen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:51:42 compute-0 systemd[1]: libpod-conmon-bf86d1289279c060aa9d68e514172d4b82839b6161dcd15f6e53dadb32ae2cab.scope: Deactivated successfully.
Feb 01 14:51:42 compute-0 sudo[93903]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:42 compute-0 ceph-mon[75179]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2690848245' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:43 compute-0 sudo[94080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjaxrfznpkeimcpbfapwdkzlodqxvld ; /usr/bin/python3'
Feb 01 14:51:43 compute-0 sudo[94080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.067613552 +0000 UTC m=+0.062029508 container create 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:43 compute-0 systemd[1]: Started libpod-conmon-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope.
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.040160579 +0000 UTC m=+0.034576565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:43 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.153576608 +0000 UTC m=+0.147992604 container init 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.15907694 +0000 UTC m=+0.153492876 container start 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.162611585 +0000 UTC m=+0.157027591 container attach 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:43 compute-0 python3[94089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:43 compute-0 happy_faraday[94099]: 167 167
Feb 01 14:51:43 compute-0 systemd[1]: libpod-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.166080318 +0000 UTC m=+0.160496274 container died 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9be3257d2caaeeb741c89502401de8f38842c12712a1672a4b93945142a68b0-merged.mount: Deactivated successfully.
Feb 01 14:51:43 compute-0 podman[94081]: 2026-02-01 14:51:43.210226035 +0000 UTC m=+0.204642011 container remove 0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:51:43 compute-0 systemd[1]: libpod-conmon-0bb0d6ff86e7fc8db501752654d9df02d7adfb6c45467f31fd3123f47a2df976.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.234748531 +0000 UTC m=+0.051962519 container create bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:43 compute-0 systemd[1]: Started libpod-conmon-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope.
Feb 01 14:51:43 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.300834078 +0000 UTC m=+0.118048086 container init bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.305562449 +0000 UTC m=+0.122776447 container start bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.309056682 +0000 UTC m=+0.126270700 container attach bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.218571532 +0000 UTC m=+0.035785540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.377843089 +0000 UTC m=+0.045282422 container create 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:51:43 compute-0 systemd[1]: Started libpod-conmon-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope.
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.35897286 +0000 UTC m=+0.026412233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:43 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.478781158 +0000 UTC m=+0.146220531 container init 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.487444005 +0000 UTC m=+0.154883338 container start 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.491759883 +0000 UTC m=+0.159199176 container attach 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:51:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]: {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     "0": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "devices": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "/dev/loop3"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             ],
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_name": "ceph_lv0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_size": "21470642176",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "name": "ceph_lv0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "tags": {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.crush_device_class": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.encrypted": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_id": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.vdo": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.with_tpm": "0"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             },
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "vg_name": "ceph_vg0"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         }
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     ],
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     "1": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "devices": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "/dev/loop4"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             ],
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_name": "ceph_lv1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_size": "21470642176",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "name": "ceph_lv1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "tags": {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.crush_device_class": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.encrypted": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_id": "1",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.vdo": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.with_tpm": "0"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             },
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "vg_name": "ceph_vg1"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         }
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     ],
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     "2": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "devices": [
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "/dev/loop5"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             ],
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_name": "ceph_lv2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_size": "21470642176",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "name": "ceph_lv2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "tags": {
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.crush_device_class": "",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.encrypted": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osd_id": "2",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.vdo": "0",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:                 "ceph.with_tpm": "0"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             },
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "type": "block",
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:             "vg_name": "ceph_vg2"
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:         }
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]:     ]
Feb 01 14:51:43 compute-0 compassionate_hellman[94178]: }
Feb 01 14:51:43 compute-0 systemd[1]: libpod-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.773792215 +0000 UTC m=+0.441231508 container died 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-81d650f810cd9aaeb7f2cd14e21ebec59051c318f7a57c05fe748fbeb75e93b9-merged.mount: Deactivated successfully.
Feb 01 14:51:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 14:51:43 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243915552' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 14:51:43 compute-0 podman[94144]: 2026-02-01 14:51:43.821871849 +0000 UTC m=+0.489311152 container remove 0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:51:43 compute-0 reverent_joliot[94135]: 
Feb 01 14:51:43 compute-0 reverent_joliot[94135]: {"epoch":1,"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","modified":"2026-02-01T14:49:56.174590Z","created":"2026-02-01T14:49:56.174590Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb 01 14:51:43 compute-0 reverent_joliot[94135]: dumped monmap epoch 1
Feb 01 14:51:43 compute-0 systemd[1]: libpod-conmon-0bdaad1ccf6e12e6fd0875f1a71ea6702fae9a0e5a9c17b586dbc3a738af5f9e.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 systemd[1]: libpod-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.844051436 +0000 UTC m=+0.661265424 container died bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3227db181a82b92088e9b0f3bc712d6623639b82e3193d9cf7910c151241859-merged.mount: Deactivated successfully.
Feb 01 14:51:43 compute-0 sudo[94004]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:43 compute-0 podman[94105]: 2026-02-01 14:51:43.881087283 +0000 UTC m=+0.698301311 container remove bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb (image=quay.io/ceph/ceph:v20, name=reverent_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:43 compute-0 systemd[1]: libpod-conmon-bde5ac378b02cc7654475dc04f84e8f48745978ab29e505d028b431d9a3b4bdb.scope: Deactivated successfully.
Feb 01 14:51:43 compute-0 sudo[94080]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:43 compute-0 sudo[94215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:43 compute-0 sudo[94215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:43 compute-0 sudo[94215]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:44 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3243915552' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 14:51:44 compute-0 sudo[94240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:44 compute-0 sudo[94240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:44 compute-0 sudo[94295]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umplsefzxmuxzonvjeedfhotwwmbkztd ; /usr/bin/python3'
Feb 01 14:51:44 compute-0 sudo[94295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.297486014 +0000 UTC m=+0.044444737 container create 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:51:44 compute-0 systemd[1]: Started libpod-conmon-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope.
Feb 01 14:51:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.371658541 +0000 UTC m=+0.118617324 container init 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.277746669 +0000 UTC m=+0.024705452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.379012378 +0000 UTC m=+0.125971131 container start 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 01 14:51:44 compute-0 brave_jennings[94318]: 167 167
Feb 01 14:51:44 compute-0 systemd[1]: libpod-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope: Deactivated successfully.
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.384954894 +0000 UTC m=+0.131913657 container attach 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.385383377 +0000 UTC m=+0.132342140 container died 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:44 compute-0 python3[94301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e8a0603801f0e2d51b3942ad64928cd50f876ed6fe4c371abdd9f91e38f095-merged.mount: Deactivated successfully.
Feb 01 14:51:44 compute-0 podman[94302]: 2026-02-01 14:51:44.421590659 +0000 UTC m=+0.168549422 container remove 5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:51:44 compute-0 systemd[1]: libpod-conmon-5bac4ea700ec32d83fb19bba2d7a0b8369ac3be5e6bdae69414d6477eb3ed5a8.scope: Deactivated successfully.
Feb 01 14:51:44 compute-0 podman[94331]: 2026-02-01 14:51:44.493153739 +0000 UTC m=+0.068766198 container create 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:51:44 compute-0 systemd[1]: Started libpod-conmon-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope.
Feb 01 14:51:44 compute-0 podman[94331]: 2026-02-01 14:51:44.470348103 +0000 UTC m=+0.045960612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 podman[94356]: 2026-02-01 14:51:44.573437156 +0000 UTC m=+0.049937620 container create 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:51:44 compute-0 podman[94331]: 2026-02-01 14:51:44.589578294 +0000 UTC m=+0.165190763 container init 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:51:44 compute-0 podman[94331]: 2026-02-01 14:51:44.5968656 +0000 UTC m=+0.172478069 container start 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 01 14:51:44 compute-0 podman[94331]: 2026-02-01 14:51:44.600061435 +0000 UTC m=+0.175673904 container attach 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 14:51:44 compute-0 systemd[1]: Started libpod-conmon-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope.
Feb 01 14:51:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 podman[94356]: 2026-02-01 14:51:44.548632792 +0000 UTC m=+0.025133306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:44 compute-0 podman[94356]: 2026-02-01 14:51:44.659988589 +0000 UTC m=+0.136489073 container init 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:51:44 compute-0 podman[94356]: 2026-02-01 14:51:44.665658627 +0000 UTC m=+0.142159101 container start 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:51:44 compute-0 podman[94356]: 2026-02-01 14:51:44.668851942 +0000 UTC m=+0.145352416 container attach 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1656115226' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 01 14:51:45 compute-0 awesome_albattani[94372]: [client.openstack]
Feb 01 14:51:45 compute-0 awesome_albattani[94372]:         key = AQD1Z39pAAAAABAAx9bXBCrv3oQqUCtEn4NgxQ==
Feb 01 14:51:45 compute-0 awesome_albattani[94372]:         caps mgr = "allow *"
Feb 01 14:51:45 compute-0 awesome_albattani[94372]:         caps mon = "profile rbd"
Feb 01 14:51:45 compute-0 awesome_albattani[94372]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb 01 14:51:45 compute-0 systemd[1]: libpod-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope: Deactivated successfully.
Feb 01 14:51:45 compute-0 podman[94331]: 2026-02-01 14:51:45.122661921 +0000 UTC m=+0.698274380 container died 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46126b5fc918f374e7f6e31b19d986a78afcad6bf30dfd513589806c18af886-merged.mount: Deactivated successfully.
Feb 01 14:51:45 compute-0 podman[94331]: 2026-02-01 14:51:45.156171374 +0000 UTC m=+0.731783833 container remove 1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1 (image=quay.io/ceph/ceph:v20, name=awesome_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:45 compute-0 systemd[1]: libpod-conmon-1fcc6aa648f1b4f937dd745deb0f5706db0fd20b0cae4882754ea6cee275a5a1.scope: Deactivated successfully.
Feb 01 14:51:45 compute-0 sudo[94295]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:45 compute-0 lvm[94490]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:45 compute-0 lvm[94490]: VG ceph_vg1 finished
Feb 01 14:51:45 compute-0 lvm[94489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:45 compute-0 lvm[94489]: VG ceph_vg0 finished
Feb 01 14:51:45 compute-0 lvm[94492]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:45 compute-0 lvm[94492]: VG ceph_vg2 finished
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:45 compute-0 zen_rubin[94379]: {}
Feb 01 14:51:45 compute-0 podman[94356]: 2026-02-01 14:51:45.463948248 +0000 UTC m=+0.940448722 container died 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:51:45 compute-0 systemd[1]: libpod-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Deactivated successfully.
Feb 01 14:51:45 compute-0 systemd[1]: libpod-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Consumed 1.270s CPU time.
Feb 01 14:51:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3a0627f36aabec1e1400747937a2c29c7ab21ad6884262c860b8ab3a0516fb1-merged.mount: Deactivated successfully.
Feb 01 14:51:45 compute-0 podman[94356]: 2026-02-01 14:51:45.514380422 +0000 UTC m=+0.990880866 container remove 053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rubin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:45 compute-0 systemd[1]: libpod-conmon-053e4a076b2c3f54e621e5ff239dc35956944e270c0b5760ce32aff3de701d35.scope: Deactivated successfully.
Feb 01 14:51:45 compute-0 sudo[94240]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:45 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1))
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:45 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb 01 14:51:45 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb 01 14:51:45 compute-0 sudo[94508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:45 compute-0 sudo[94508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:45 compute-0 sudo[94508]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:45 compute-0 sudo[94533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:51:45 compute-0 sudo[94533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1656115226' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.eusbkm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:46 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.121946788 +0000 UTC m=+0.046579304 container create f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:46 compute-0 systemd[1]: Started libpod-conmon-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope.
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.095280167 +0000 UTC m=+0.019912693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:46 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.228685637 +0000 UTC m=+0.153318163 container init f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.237830695 +0000 UTC m=+0.162463221 container start f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.242034713 +0000 UTC m=+0.166667239 container attach f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 14:51:46 compute-0 eager_buck[94664]: 167 167
Feb 01 14:51:46 compute-0 systemd[1]: libpod-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope: Deactivated successfully.
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.245151971 +0000 UTC m=+0.169784497 container died f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 01 14:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e0fe7b94bb83bc94b4c1c35c5eaddf9090fb14fab6b13f63bf11d8935b552b-merged.mount: Deactivated successfully.
Feb 01 14:51:46 compute-0 podman[94602]: 2026-02-01 14:51:46.288653048 +0000 UTC m=+0.213285584 container remove f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:51:46 compute-0 systemd[1]: libpod-conmon-f218575825c2ad991021a51c4f06baff0a580290758443a0fcc37d8ff615f3ff.scope: Deactivated successfully.
Feb 01 14:51:46 compute-0 systemd[1]: Reloading.
Feb 01 14:51:46 compute-0 systemd-rc-local-generator[94775]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:46 compute-0 systemd-sysv-generator[94780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:46 compute-0 systemd[1]: Reloading.
Feb 01 14:51:46 compute-0 systemd-sysv-generator[94848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:46 compute-0 systemd-rc-local-generator[94842]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:46 compute-0 sudo[94814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dukqnawcrqsyohytlpmfvohpxvljbyrb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957506.097269-36690-130780346629463/async_wrapper.py j602250558024 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957506.097269-36690-130780346629463/AnsiballZ_command.py _'
Feb 01 14:51:46 compute-0 sudo[94814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:46 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.eusbkm for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:51:47 compute-0 ceph-mon[75179]: Deploying daemon rgw.rgw.compute-0.eusbkm on compute-0
Feb 01 14:51:47 compute-0 ceph-mon[75179]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94857]: Invoked with j602250558024 30 /home/zuul/.ansible/tmp/ansible-tmp-1769957506.097269-36690-130780346629463/AnsiballZ_command.py _
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94886]: Starting module and watcher
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94886]: Start watching 94887 (30)
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94887]: Start module (94887)
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94857]: Return async_wrapper task started.
Feb 01 14:51:47 compute-0 sudo[94814]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:47 compute-0 podman[94910]: 2026-02-01 14:51:47.225013853 +0000 UTC m=+0.058651304 container create 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 14:51:47 compute-0 python3[94893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488b4c4102f558e4ecfce0447c4bd5716e7fadcc70bd8d2d40cf2d9e5e2b6d6f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.eusbkm supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 podman[94910]: 2026-02-01 14:51:47.281661 +0000 UTC m=+0.115298491 container init 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:51:47 compute-0 podman[94910]: 2026-02-01 14:51:47.291265341 +0000 UTC m=+0.124902802 container start 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-rgw-rgw-compute-0-eusbkm, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:51:47 compute-0 bash[94910]: 5a12c18d2f7906fde337c42c6cd3f20ec687c5a4d1e621135f618856254cf060
Feb 01 14:51:47 compute-0 podman[94910]: 2026-02-01 14:51:47.204845545 +0000 UTC m=+0.038483026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:47 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.eusbkm for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.351390146 +0000 UTC m=+0.076846428 container create ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 01 14:51:47 compute-0 radosgw[94941]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:47 compute-0 radosgw[94941]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb 01 14:51:47 compute-0 radosgw[94941]: framework: beast
Feb 01 14:51:47 compute-0 radosgw[94941]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb 01 14:51:47 compute-0 radosgw[94941]: init_numa not setting numa affinity
Feb 01 14:51:47 compute-0 sudo[94533]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1))
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 886d3577-b177-476c-87ab-959186f1d739 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.310258856 +0000 UTC m=+0.035715168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1))
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 01 14:51:47 compute-0 systemd[1]: Started libpod-conmon-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope.
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 01 14:51:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb 01 14:51:47 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.459289437 +0000 UTC m=+0.184745689 container init ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:51:47 compute-0 sudo[94976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.466874491 +0000 UTC m=+0.192330733 container start ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.471186453 +0000 UTC m=+0.196642725 container attach ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:47 compute-0 sudo[94976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:47 compute-0 sudo[94976]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:47 compute-0 sudo[95003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
Feb 01 14:51:47 compute-0 sudo[95003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:47 compute-0 naughty_swirles[94974]: 
Feb 01 14:51:47 compute-0 naughty_swirles[94974]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 01 14:51:47 compute-0 systemd[1]: libpod-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope: Deactivated successfully.
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.890502613 +0000 UTC m=+0.615958935 container died ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 14:51:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-987d948f92e62626acc8b81262c567a5a717a68d6b02f39e496dae37a52d1987-merged.mount: Deactivated successfully.
Feb 01 14:51:47 compute-0 podman[94926]: 2026-02-01 14:51:47.955333731 +0000 UTC m=+0.680789973 container remove ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a (image=quay.io/ceph/ceph:v20, name=naughty_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:47 compute-0 systemd[1]: libpod-conmon-ba0c35ba63526379f3d14221bdc436448203da14efb06e0934f2a09a9532593a.scope: Deactivated successfully.
Feb 01 14:51:47 compute-0 ansible-async_wrapper.py[94887]: Module complete (94887)
Feb 01 14:51:47 compute-0 podman[95102]: 2026-02-01 14:51:47.992715924 +0000 UTC m=+0.048124817 container create aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 14:51:48 compute-0 systemd[1]: Started libpod-conmon-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope.
Feb 01 14:51:48 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:47.973149823 +0000 UTC m=+0.028558696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:48.076000632 +0000 UTC m=+0.131409595 container init aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:48.085245073 +0000 UTC m=+0.140653926 container start aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:48 compute-0 elegant_shannon[95122]: 167 167
Feb 01 14:51:48 compute-0 systemd[1]: libpod-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope: Deactivated successfully.
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:48.090726127 +0000 UTC m=+0.146134980 container attach aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:48.091096408 +0000 UTC m=+0.146505261 container died aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-28ee7a4e650181f7d374f53b6c5c14da4e7b84ed7302922e7ef36b3b7f44747b-merged.mount: Deactivated successfully.
Feb 01 14:51:48 compute-0 podman[95102]: 2026-02-01 14:51:48.120370453 +0000 UTC m=+0.175779306 container remove aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:51:48 compute-0 systemd[1]: libpod-conmon-aca15acbaa418bc2aa8c01939adf0e66be9777bd88990c52586bec5eed2bd817.scope: Deactivated successfully.
Feb 01 14:51:48 compute-0 systemd[1]: Reloading.
Feb 01 14:51:48 compute-0 systemd-sysv-generator[95219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:48 compute-0 systemd-rc-local-generator[95215]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb 01 14:51:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb 01 14:51:48 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb 01 14:51:48 compute-0 sudo[95189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmrlmrurjojooxtksrzbujqbbetmbke ; /usr/bin/python3'
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mon[75179]: Saving service rgw.rgw spec with placement compute-0
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.agpbju", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:48 compute-0 ceph-mon[75179]: Deploying daemon mds.cephfs.compute-0.agpbju on compute-0
Feb 01 14:51:48 compute-0 ceph-mon[75179]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:48 compute-0 ceph-mon[75179]: from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:48 compute-0 sudo[95189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb 01 14:51:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 01 14:51:48 compute-0 systemd[1]: Reloading.
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 4 completed events
Feb 01 14:51:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:51:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:48 compute-0 ceph-mgr[75469]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Feb 01 14:51:48 compute-0 systemd-sysv-generator[95260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:51:48 compute-0 systemd-rc-local-generator[95256]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:51:48 compute-0 python3[95227]: ansible-ansible.legacy.async_status Invoked with jid=j602250558024.94857 mode=status _async_dir=/root/.ansible_async
Feb 01 14:51:48 compute-0 sudo[95189]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:48 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.agpbju for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f...
Feb 01 14:51:48 compute-0 sudo[95323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfsqwcnwtnszptbonmhrnqmpfcpgcrwc ; /usr/bin/python3'
Feb 01 14:51:48 compute-0 sudo[95323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:48 compute-0 python3[95335]: ansible-ansible.legacy.async_status Invoked with jid=j602250558024.94857 mode=cleanup _async_dir=/root/.ansible_async
Feb 01 14:51:48 compute-0 sudo[95323]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:48 compute-0 podman[95363]: 2026-02-01 14:51:48.909737145 +0000 UTC m=+0.057927344 container create 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12acec24b0927118a765720bd450011c6b0e040bf426934451e1e450a0d65aa/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.agpbju supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:48 compute-0 podman[95363]: 2026-02-01 14:51:48.882797205 +0000 UTC m=+0.030987494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:48 compute-0 podman[95363]: 2026-02-01 14:51:48.987876637 +0000 UTC m=+0.136066926 container init 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 14:51:48 compute-0 podman[95363]: 2026-02-01 14:51:48.994424042 +0000 UTC m=+0.142614281 container start 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:48 compute-0 bash[95363]: 7ea15bdd3bc5678f8ea492ec361549bccc66be4c197f27f7f341b6ace525728d
Feb 01 14:51:49 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.agpbju for 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f.
Feb 01 14:51:49 compute-0 sudo[95003]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:49 compute-0 ceph-mds[95382]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:49 compute-0 ceph-mds[95382]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb 01 14:51:49 compute-0 ceph-mds[95382]: main not setting numa affinity
Feb 01 14:51:49 compute-0 ceph-mds[95382]: pidfile_write: ignore empty --pid-file
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:49 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju[95378]: starting mds.cephfs.compute-0.agpbju at 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 2 from mon.0
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1))
Feb 01 14:51:49 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event bf815f15-3eda-4b12-8174-8780c3db2bc7 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 01 14:51:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 sudo[95401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:51:49 compute-0 sudo[95401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:49 compute-0 sudo[95401]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:49 compute-0 sudo[95426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:49 compute-0 sudo[95426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:49 compute-0 sudo[95426]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:49 compute-0 sudo[95451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:51:49 compute-0 sudo[95451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:49 compute-0 sudo[95499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwrxwiyxjtsaptkvyspjehvqkpgrqei ; /usr/bin/python3'
Feb 01 14:51:49 compute-0 sudo[95499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb 01 14:51:49 compute-0 ceph-mon[75179]: osdmap e30: 3 total, 3 up, 3 in
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 01 14:51:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb 01 14:51:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb 01 14:51:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:49 compute-0 python3[95501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:49 compute-0 podman[96035]: 2026-02-01 14:51:49.553207643 +0000 UTC m=+0.039086903 container create 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:49 compute-0 systemd[1]: Started libpod-conmon-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope.
Feb 01 14:51:49 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:49 compute-0 podman[96035]: 2026-02-01 14:51:49.608776739 +0000 UTC m=+0.094656019 container init 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:49 compute-0 podman[96035]: 2026-02-01 14:51:49.615016965 +0000 UTC m=+0.100896225 container start 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 14:51:49 compute-0 podman[96035]: 2026-02-01 14:51:49.618610037 +0000 UTC m=+0.104489297 container attach 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:51:49 compute-0 podman[96035]: 2026-02-01 14:51:49.537242233 +0000 UTC m=+0.023121503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:49 compute-0 podman[96127]: 2026-02-01 14:51:49.675985564 +0000 UTC m=+0.048987232 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:51:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v68: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:49 compute-0 podman[96127]: 2026-02-01 14:51:49.857546912 +0000 UTC m=+0.230548580 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 14:51:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:50 compute-0 amazing_joliot[96111]: 
Feb 01 14:51:50 compute-0 amazing_joliot[96111]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 01 14:51:50 compute-0 systemd[1]: libpod-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope: Deactivated successfully.
Feb 01 14:51:50 compute-0 podman[96035]: 2026-02-01 14:51:50.033383189 +0000 UTC m=+0.519262449 container died 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1ca017340c87f17d380def908ee672e17846be7a8e74639f596390bd63ab46b-merged.mount: Deactivated successfully.
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 3 from mon.0
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Monitors have assigned me to become a standby
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 new map
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-02-01T14:51:50.072446+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-01T14:51:37.585458+0000
                                           modified        2026-02-01T14:51:37.585459+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.agpbju{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:boot
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] as mds.0
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.agpbju assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.agpbju"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.agpbju"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e3 all = 0
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e4 new map
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-02-01T14:51:50.079861+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-01T14:51:37.585458+0000
                                           modified        2026-02-01T14:51:50.079856+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.agpbju{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:creating}
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 4 from mon.0
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x1
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x100
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x600
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x601
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x602
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x603
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x604
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x605
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x606
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x607
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x608
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.cache creating system inode with ino:0x609
Feb 01 14:51:50 compute-0 podman[96035]: 2026-02-01 14:51:50.104427391 +0000 UTC m=+0.590306661 container remove 8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2 (image=quay.io/ceph/ceph:v20, name=amazing_joliot, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:51:50 compute-0 systemd[1]: libpod-conmon-8ae2f45fb7ac3cd38d1d6eb6389d583e8c333c238f83026bc6b832fdc1217dd2.scope: Deactivated successfully.
Feb 01 14:51:50 compute-0 ceph-mds[95382]: mds.0.4 creating_done
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.agpbju is now active in filesystem cephfs as rank 0
Feb 01 14:51:50 compute-0 sudo[95499]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb 01 14:51:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1252107850' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 01 14:51:50 compute-0 ceph-mon[75179]: osdmap e31: 3 total, 3 up, 3 in
Feb 01 14:51:50 compute-0 ceph-mon[75179]: pgmap v68: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:51:50 compute-0 ceph-mon[75179]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:boot
Feb 01 14:51:50 compute-0 ceph-mon[75179]: daemon mds.cephfs.compute-0.agpbju assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: Cluster is now healthy
Feb 01 14:51:50 compute-0 ceph-mon[75179]: fsmap cephfs:0 1 up:standby
Feb 01 14:51:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.agpbju"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:creating}
Feb 01 14:51:50 compute-0 ceph-mon[75179]: daemon mds.cephfs.compute-0.agpbju is now active in filesystem cephfs as rank 0
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 01 14:51:50 compute-0 sudo[95451]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:50 compute-0 sudo[96357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:50 compute-0 sudo[96357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:50 compute-0 sudo[96357]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:50 compute-0 sudo[96405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehatsrjakhkzedfrgciorliavsexgreb ; /usr/bin/python3'
Feb 01 14:51:50 compute-0 sudo[96405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:50 compute-0 sudo[96406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:51:50 compute-0 sudo[96406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:50 compute-0 python3[96415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:50 compute-0 podman[96433]: 2026-02-01 14:51:50.88160007 +0000 UTC m=+0.032550009 container create a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:50 compute-0 systemd[1]: Started libpod-conmon-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope.
Feb 01 14:51:50 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:50 compute-0 podman[96433]: 2026-02-01 14:51:50.957036346 +0000 UTC m=+0.107986295 container init a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:50 compute-0 podman[96433]: 2026-02-01 14:51:50.961849792 +0000 UTC m=+0.112799731 container start a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:51:50 compute-0 podman[96433]: 2026-02-01 14:51:50.964775134 +0000 UTC m=+0.115725113 container attach a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:50 compute-0 podman[96433]: 2026-02-01 14:51:50.869027435 +0000 UTC m=+0.019977394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:50 compute-0 podman[96463]: 2026-02-01 14:51:50.979659494 +0000 UTC m=+0.044193367 container create 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:51 compute-0 systemd[1]: Started libpod-conmon-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope.
Feb 01 14:51:51 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:51 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:51.047854456 +0000 UTC m=+0.112388379 container init 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:51.052200789 +0000 UTC m=+0.116734622 container start 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 14:51:51 compute-0 zealous_elion[96481]: 167 167
Feb 01 14:51:51 compute-0 systemd[1]: libpod-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:51.055140382 +0000 UTC m=+0.119674295 container attach 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:51.055480561 +0000 UTC m=+0.120014424 container died 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:50.962798279 +0000 UTC m=+0.027332132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb5352217bdbbfc855404e8f453b5badcf7112aad64fb0281ca9668239483ae-merged.mount: Deactivated successfully.
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e5 new map
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-02-01T14:51:51:083176+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-02-01T14:51:37.585458+0000
                                           modified        2026-02-01T14:51:51.083173+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14253 members: 14253
                                           [mds.cephfs.compute-0.agpbju{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Feb 01 14:51:51 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju Updating MDS map to version 5 from mon.0
Feb 01 14:51:51 compute-0 ceph-mds[95382]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 01 14:51:51 compute-0 ceph-mds[95382]: mds.0.4 handle_mds_map state change up:creating --> up:active
Feb 01 14:51:51 compute-0 ceph-mds[95382]: mds.0.4 recovery_done -- successful recovery!
Feb 01 14:51:51 compute-0 ceph-mds[95382]: mds.0.4 active_start
Feb 01 14:51:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:active
Feb 01 14:51:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:active}
Feb 01 14:51:51 compute-0 podman[96463]: 2026-02-01 14:51:51.097502076 +0000 UTC m=+0.162035939 container remove 6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_elion, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:51 compute-0 systemd[1]: libpod-conmon-6eb3a02da4c5147ae8b4a7c7d5bb7008f1dc32db7a60651538a3c315d55fca0b.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.230853745 +0000 UTC m=+0.046278766 container create aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 14:51:51 compute-0 systemd[1]: Started libpod-conmon-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope.
Feb 01 14:51:51 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.215631546 +0000 UTC m=+0.031056587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.328382994 +0000 UTC m=+0.143808075 container init aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.33957453 +0000 UTC m=+0.154999551 container start aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.342517753 +0000 UTC m=+0.157942824 container attach aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb 01 14:51:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 14:51:51 compute-0 condescending_mclean[96456]: 
Feb 01 14:51:51 compute-0 condescending_mclean[96456]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb 01 14:51:51 compute-0 systemd[1]: libpod-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 conmon[96456]: conmon a17697f0d79390b8af6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope/container/memory.events
Feb 01 14:51:51 compute-0 podman[96433]: 2026-02-01 14:51:51.371582532 +0000 UTC m=+0.522532481 container died a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-32990e18280e2ca3315ad8cc98f80d39132172514690ee000cc1e60c7e71be7a-merged.mount: Deactivated successfully.
Feb 01 14:51:51 compute-0 podman[96433]: 2026-02-01 14:51:51.416327353 +0000 UTC m=+0.567277322 container remove a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e (image=quay.io/ceph/ceph:v20, name=condescending_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb 01 14:51:51 compute-0 systemd[1]: libpod-conmon-a17697f0d79390b8af6c5693cdce4891f407bea93062b7324de58412ec70685e.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb 01 14:51:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb 01 14:51:51 compute-0 sudo[96405]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:51 compute-0 ceph-mon[75179]: osdmap e32: 3 total, 3 up, 3 in
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:51 compute-0 ceph-mon[75179]: mds.? [v2:192.168.122.100:6814/2861425497,v1:192.168.122.100:6815/2861425497] up:active
Feb 01 14:51:51 compute-0 ceph-mon[75179]: fsmap cephfs:1 {0=cephfs.compute-0.agpbju=up:active}
Feb 01 14:51:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 14:51:51 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v71: 9 pgs: 1 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 01 14:51:51 compute-0 jovial_archimedes[96544]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:51:51 compute-0 jovial_archimedes[96544]: --> All data devices are unavailable
Feb 01 14:51:51 compute-0 systemd[1]: libpod-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.81350617 +0000 UTC m=+0.628931221 container died aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ca05827000d027791b2718ecb612744794bc3a668d619b7eadefbf7060c2c6b-merged.mount: Deactivated successfully.
Feb 01 14:51:51 compute-0 podman[96528]: 2026-02-01 14:51:51.864471836 +0000 UTC m=+0.679896887 container remove aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 14:51:51 compute-0 systemd[1]: libpod-conmon-aa208e02108fbfabfa8c7f5d42b22a0b612ebc03fed90390f284607a544e208a.scope: Deactivated successfully.
Feb 01 14:51:51 compute-0 sudo[96406]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:51 compute-0 sudo[96592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:51 compute-0 sudo[96592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:51 compute-0 sudo[96592]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:52 compute-0 sudo[96617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:51:52 compute-0 sudo[96617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:52 compute-0 ansible-async_wrapper.py[94886]: Done in kid B.
Feb 01 14:51:52 compute-0 sudo[96665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdgtyrpuaioasenskjlbezvsegtfpumw ; /usr/bin/python3'
Feb 01 14:51:52 compute-0 sudo[96665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:52 compute-0 python3[96667]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.34064443 +0000 UTC m=+0.039891416 container create 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.359004637 +0000 UTC m=+0.058070058 container create 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:52 compute-0 systemd[1]: Started libpod-conmon-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope.
Feb 01 14:51:52 compute-0 systemd[1]: Started libpod-conmon-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope.
Feb 01 14:51:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.405394795 +0000 UTC m=+0.104460236 container init 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.409597873 +0000 UTC m=+0.108844869 container init 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.409759248 +0000 UTC m=+0.108824669 container start 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:51:52 compute-0 nice_banach[96714]: 167 167
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.412484875 +0000 UTC m=+0.111550296 container attach 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:51:52 compute-0 systemd[1]: libpod-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope: Deactivated successfully.
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.413001439 +0000 UTC m=+0.112066860 container died 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.413097632 +0000 UTC m=+0.112344608 container start 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.31936775 +0000 UTC m=+0.018614726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.326707897 +0000 UTC m=+0.025773338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.42473802 +0000 UTC m=+0.123985226 container attach 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdcd4ba8c444964fbd8990fc9aa6a6b8357097de04497ba368419aee66cab732-merged.mount: Deactivated successfully.
Feb 01 14:51:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb 01 14:51:52 compute-0 ceph-mon[75179]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 01 14:51:52 compute-0 ceph-mon[75179]: osdmap e33: 3 total, 3 up, 3 in
Feb 01 14:51:52 compute-0 ceph-mon[75179]: pgmap v71: 9 pgs: 1 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 01 14:51:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb 01 14:51:52 compute-0 podman[96680]: 2026-02-01 14:51:52.467961559 +0000 UTC m=+0.167026970 container remove 1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_banach, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:52 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb 01 14:51:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 01 14:51:52 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 01 14:51:52 compute-0 systemd[1]: libpod-conmon-1c66e514cb7f87b5147802817ee3d7514437aff2a59659f3d6e1ecb90d435361.scope: Deactivated successfully.
Feb 01 14:51:52 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.592357815 +0000 UTC m=+0.034775471 container create 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:51:52 compute-0 systemd[1]: Started libpod-conmon-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope.
Feb 01 14:51:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.577579189 +0000 UTC m=+0.019996885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.69008594 +0000 UTC m=+0.132503636 container init 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.695715339 +0000 UTC m=+0.138133005 container start 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.698895798 +0000 UTC m=+0.141313484 container attach 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 14:51:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:52 compute-0 intelligent_rhodes[96712]: 
Feb 01 14:51:52 compute-0 intelligent_rhodes[96712]: [{"container_id": "9bd653623727", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2026-02-01T14:50:38.747657Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-01T14:50:38.821195Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617737Z", "memory_usage": 7790919, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-01T14:50:38.651371Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@crash.compute-0", "version": "20.2.0"}, {"container_id": "7ea15bdd3bc5", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.86%", "created": "2026-02-01T14:51:49.007731Z", "daemon_id": "cephfs.compute-0.agpbju", "daemon_name": "mds.cephfs.compute-0.agpbju", "daemon_type": "mds", "events": ["2026-02-01T14:51:49.084836Z daemon:mds.cephfs.compute-0.agpbju [INFO] \"Deployed mds.cephfs.compute-0.agpbju on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.618162Z", "memory_usage": 13537116, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-01T14:51:48.892159Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mds.cephfs.compute-0.agpbju", "version": "20.2.0"}, {"container_id": "c0b520f4a011", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.86%", "created": "2026-02-01T14:50:02.127621Z", "daemon_id": "compute-0.viosrg", "daemon_name": "mgr.compute-0.viosrg", "daemon_type": "mgr", "events": ["2026-02-01T14:50:42.944653Z daemon:mgr.compute-0.viosrg [INFO] \"Reconfigured mgr.compute-0.viosrg on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617666Z", "memory_usage": 546203238, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-01T14:50:02.023366Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mgr.compute-0.viosrg", "version": "20.2.0"}, {"container_id": "75630865abcd", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.93%", "created": "2026-02-01T14:49:58.016505Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-01T14:50:42.376548Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617574Z", "memory_request": 2147483648, "memory_usage": 40433090, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-01T14:50:00.222597Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@mon.compute-0", "version": "20.2.0"}, {"container_id": "88ca06885fff", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.43%", "created": "2026-02-01T14:51:00.928311Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-01T14:51:00.991275Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617818Z", "memory_request": 4294967296, "memory_usage": 58615398, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:00.858120Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.0", "version": "20.2.0"}, {"container_id": "751c852b5ece", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.69%", "created": "2026-02-01T14:51:04.396414Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-01T14:51:04.483113Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617922Z", "memory_request": 4294967296, "memory_usage": 57807994, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:04.287775Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.1", "version": "20.2.0"}, {"container_id": "e57f55d1e39c", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2026-02-01T14:51:08.085523Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-01T14:51:08.160346Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-01T14:51:50.617991Z", "memory_request": 4294967296, "memory_usage": 56088330, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-01T14:51:07.966157Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@osd.2", "version": "20.2.0"}, {"container_id": "5a12c18d2f79", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "6.18%", "created": "2026-02-01T14:51:47.312683Z", "daemon_id": "rgw.compute-0.eusbkm", "daemon_name": "rgw.rgw.compute-0.eusbkm", "daemon_type": "rgw", "events": ["2026-02-01T14:51:47.388931Z daemon:rgw.rgw.compute-0.eusbkm [INFO] \"Deployed rgw.rgw.compute-0.eusbkm on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-02-01T14:51:50.618062Z", "memory_usage": 56392417, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-01T14:51:47.210469Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f@rgw.rgw.compute-0.eusbkm", "version": "20.2.0"}]
Feb 01 14:51:52 compute-0 systemd[1]: libpod-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope: Deactivated successfully.
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.814994931 +0000 UTC m=+0.514241947 container died 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee400e93a8c8fdd4a3deeb5bfc470eccd6714b3c8ac6faafca2950cdd75285f3-merged.mount: Deactivated successfully.
Feb 01 14:51:52 compute-0 podman[96682]: 2026-02-01 14:51:52.854348801 +0000 UTC m=+0.553595817 container remove 28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26 (image=quay.io/ceph/ceph:v20, name=intelligent_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 14:51:52 compute-0 rsyslogd[1001]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "9bd653623727", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 01 14:51:52 compute-0 systemd[1]: libpod-conmon-28e6004d4fe5b028a219dcadefd4526c1a4390de98a4bedf70d7e88eadacca26.scope: Deactivated successfully.
Feb 01 14:51:52 compute-0 sudo[96665]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:52 compute-0 fervent_beaver[96776]: {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     "0": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "devices": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "/dev/loop3"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             ],
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_name": "ceph_lv0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_size": "21470642176",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "name": "ceph_lv0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "tags": {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.crush_device_class": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.encrypted": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_id": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.vdo": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.with_tpm": "0"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             },
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "vg_name": "ceph_vg0"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         }
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     ],
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     "1": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "devices": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "/dev/loop4"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             ],
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_name": "ceph_lv1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_size": "21470642176",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "name": "ceph_lv1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "tags": {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.crush_device_class": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.encrypted": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_id": "1",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.vdo": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.with_tpm": "0"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             },
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "vg_name": "ceph_vg1"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         }
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     ],
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     "2": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "devices": [
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "/dev/loop5"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             ],
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_name": "ceph_lv2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_size": "21470642176",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "name": "ceph_lv2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "tags": {
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.crush_device_class": "",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.encrypted": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osd_id": "2",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.vdo": "0",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:                 "ceph.with_tpm": "0"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             },
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "type": "block",
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:             "vg_name": "ceph_vg2"
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:         }
Feb 01 14:51:52 compute-0 fervent_beaver[96776]:     ]
Feb 01 14:51:52 compute-0 fervent_beaver[96776]: }
Feb 01 14:51:52 compute-0 systemd[1]: libpod-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope: Deactivated successfully.
Feb 01 14:51:52 compute-0 podman[96759]: 2026-02-01 14:51:52.962790818 +0000 UTC m=+0.405208474 container died 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:53 compute-0 podman[96759]: 2026-02-01 14:51:53.002493457 +0000 UTC m=+0.444911123 container remove 6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:51:53 compute-0 systemd[1]: libpod-conmon-6cedb40d96aa098eb0f4c9de252a142442663d2753bc103e317c0704475d7ea3.scope: Deactivated successfully.
Feb 01 14:51:53 compute-0 sudo[96617]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:53 compute-0 sudo[96812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:53 compute-0 sudo[96812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:53 compute-0 sudo[96812]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:53 compute-0 sudo[96837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:53 compute-0 sudo[96837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba6584a7f0267ffcf4a2a458290350426b082cfffaf203167eceb7867a6ee78a-merged.mount: Deactivated successfully.
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.403673715 +0000 UTC m=+0.040593285 container create 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:51:53 compute-0 systemd[1]: Started libpod-conmon-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope.
Feb 01 14:51:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb 01 14:51:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 01 14:51:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb 01 14:51:53 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:53 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb 01 14:51:53 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 5 completed events
Feb 01 14:51:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.383846616 +0000 UTC m=+0.020766186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:53 compute-0 ceph-mon[75179]: osdmap e34: 3 total, 3 up, 3 in
Feb 01 14:51:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 01 14:51:53 compute-0 ceph-mon[75179]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 01 14:51:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.489093763 +0000 UTC m=+0.126013383 container init 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:51:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.495073971 +0000 UTC m=+0.131993521 container start 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:53 compute-0 zen_albattani[96890]: 167 167
Feb 01 14:51:53 compute-0 systemd[1]: libpod-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope: Deactivated successfully.
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.500779182 +0000 UTC m=+0.137698742 container attach 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.501258686 +0000 UTC m=+0.138178236 container died 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a965885ccef4b9d6d2a05f508c5bc87ba770e12b698928b1d91cea91f9c42e-merged.mount: Deactivated successfully.
Feb 01 14:51:53 compute-0 podman[96874]: 2026-02-01 14:51:53.550216076 +0000 UTC m=+0.187135616 container remove 5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_albattani, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:51:53 compute-0 systemd[1]: libpod-conmon-5d1b8c7eeefbbaf4bccce97e9f2881b8e691b481a53898bda4c64e607d641918.scope: Deactivated successfully.
Feb 01 14:51:53 compute-0 sudo[96932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfthuvvpauzjnnlbpgnbyaewvtxtnrie ; /usr/bin/python3'
Feb 01 14:51:53 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:53 compute-0 podman[96940]: 2026-02-01 14:51:53.66139592 +0000 UTC m=+0.031770537 container create aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 01 14:51:53 compute-0 systemd[1]: Started libpod-conmon-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope.
Feb 01 14:51:53 compute-0 python3[96934]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v74: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 01 14:51:53 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 podman[96940]: 2026-02-01 14:51:53.739235474 +0000 UTC m=+0.109610111 container init aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:53 compute-0 podman[96940]: 2026-02-01 14:51:53.647157299 +0000 UTC m=+0.017531946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:53 compute-0 podman[96940]: 2026-02-01 14:51:53.746212511 +0000 UTC m=+0.116587168 container start aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:53 compute-0 podman[96940]: 2026-02-01 14:51:53.749956926 +0000 UTC m=+0.120331573 container attach aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 14:51:53 compute-0 podman[96959]: 2026-02-01 14:51:53.763164029 +0000 UTC m=+0.048100207 container create 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:51:53 compute-0 systemd[1]: Started libpod-conmon-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope.
Feb 01 14:51:53 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:53 compute-0 podman[96959]: 2026-02-01 14:51:53.747524408 +0000 UTC m=+0.032460616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:53 compute-0 podman[96959]: 2026-02-01 14:51:53.849874463 +0000 UTC m=+0.134810661 container init 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 01 14:51:53 compute-0 podman[96959]: 2026-02-01 14:51:53.853926827 +0000 UTC m=+0.138863055 container start 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:53 compute-0 podman[96959]: 2026-02-01 14:51:53.857375144 +0000 UTC m=+0.142311412 container attach 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326524861' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:54 compute-0 loving_babbage[96976]: 
Feb 01 14:51:54 compute-0 loving_babbage[96976]: {"fsid":"2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":113,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1769957475,"num_in_osds":3,"osd_in_since":1769957454,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":29,"data_bytes":463390,"bytes_used":83931136,"bytes_avail":64327995392,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-02-01T14:51:51:083176+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.agpbju","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-01T14:51:19.699816+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"2e17c372-c1ad-48d6-8bf0-bbf5585c23cf":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Feb 01 14:51:54 compute-0 lvm[97071]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:51:54 compute-0 lvm[97073]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:51:54 compute-0 lvm[97073]: VG ceph_vg1 finished
Feb 01 14:51:54 compute-0 lvm[97071]: VG ceph_vg0 finished
Feb 01 14:51:54 compute-0 systemd[1]: libpod-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope: Deactivated successfully.
Feb 01 14:51:54 compute-0 podman[96959]: 2026-02-01 14:51:54.374853332 +0000 UTC m=+0.659789510 container died 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 14:51:54 compute-0 lvm[97077]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:51:54 compute-0 lvm[97077]: VG ceph_vg2 finished
Feb 01 14:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-280bc7f9563d50d11d2f8003da972411dc23a938ce086f31050c48bd13a0062b-merged.mount: Deactivated successfully.
Feb 01 14:51:54 compute-0 podman[96959]: 2026-02-01 14:51:54.416111235 +0000 UTC m=+0.701047423 container remove 255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91 (image=quay.io/ceph/ceph:v20, name=loving_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 01 14:51:54 compute-0 systemd[1]: libpod-conmon-255bb346abc4acee73fbd573307d8530c4c23aa7a7b9663e67f3d4b67e0cbf91.scope: Deactivated successfully.
Feb 01 14:51:54 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:54 compute-0 dreamy_elbakyan[96957]: {}
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb 01 14:51:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 01 14:51:54 compute-0 ceph-mon[75179]: osdmap e35: 3 total, 3 up, 3 in
Feb 01 14:51:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:54 compute-0 ceph-mon[75179]: pgmap v74: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 01 14:51:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/326524861' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb 01 14:51:54 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 01 14:51:54 compute-0 systemd[1]: libpod-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Deactivated successfully.
Feb 01 14:51:54 compute-0 systemd[1]: libpod-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Consumed 1.083s CPU time.
Feb 01 14:51:54 compute-0 podman[96940]: 2026-02-01 14:51:54.520775775 +0000 UTC m=+0.891150432 container died aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b54dfd8f6875ba6bd2cc796647ee46ffd65901d7ee6c3e129de6902b296e788-merged.mount: Deactivated successfully.
Feb 01 14:51:54 compute-0 podman[96940]: 2026-02-01 14:51:54.563393047 +0000 UTC m=+0.933767704 container remove aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:54 compute-0 systemd[1]: libpod-conmon-aabeb04b2034cae6e9ee0465e07b8f1add8ba543ab2e3b700489d14f3a366b6f.scope: Deactivated successfully.
Feb 01 14:51:54 compute-0 sudo[96837]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:54 compute-0 sudo[97105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:51:54 compute-0 sudo[97105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:54 compute-0 sudo[97105]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:54 compute-0 sudo[97130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:54 compute-0 sudo[97130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:54 compute-0 sudo[97130]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:54 compute-0 sudo[97155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 14:51:54 compute-0 sudo[97155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:55 compute-0 podman[97224]: 2026-02-01 14:51:55.068799504 +0000 UTC m=+0.043197389 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 14:51:55 compute-0 ceph-mds[95382]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 01 14:51:55 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mds-cephfs-compute-0-agpbju[95378]: 2026-02-01T14:51:55.096+0000 7efeb15b5640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 01 14:51:55 compute-0 sudo[97268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbvskimmzxxzikgikqbwdnlzqjjqepw ; /usr/bin/python3'
Feb 01 14:51:55 compute-0 sudo[97268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:55 compute-0 podman[97224]: 2026-02-01 14:51:55.209812649 +0000 UTC m=+0.184210494 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:51:55 compute-0 python3[97270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:55 compute-0 podman[97305]: 2026-02-01 14:51:55.34181708 +0000 UTC m=+0.040858893 container create dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:55 compute-0 systemd[1]: Started libpod-conmon-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope.
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:51:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:55 compute-0 podman[97305]: 2026-02-01 14:51:55.319406868 +0000 UTC m=+0.018448671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:55 compute-0 podman[97305]: 2026-02-01 14:51:55.426377464 +0000 UTC m=+0.125419317 container init dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:51:55 compute-0 podman[97305]: 2026-02-01 14:51:55.434196514 +0000 UTC m=+0.133238297 container start dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:55 compute-0 podman[97305]: 2026-02-01 14:51:55.437504068 +0000 UTC m=+0.136545941 container attach dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb 01 14:51:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb 01 14:51:55 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb 01 14:51:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 01 14:51:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 01 14:51:55 compute-0 ceph-mon[75179]: osdmap e36: 3 total, 3 up, 3 in
Feb 01 14:51:55 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 01 14:51:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v77: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 01 14:51:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1076259395' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:51:55 compute-0 eager_fermi[97337]: 
Feb 01 14:51:55 compute-0 eager_fermi[97337]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.eusbkm","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb 01 14:51:55 compute-0 systemd[1]: libpod-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope: Deactivated successfully.
Feb 01 14:51:55 compute-0 podman[97460]: 2026-02-01 14:51:55.87712204 +0000 UTC m=+0.027617279 container died dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 14:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e8869a09d4f9ebbf153579aca9c55d69b20d5417084bd9de7fa3d09e74f4f9-merged.mount: Deactivated successfully.
Feb 01 14:51:55 compute-0 podman[97460]: 2026-02-01 14:51:55.912074895 +0000 UTC m=+0.062570134 container remove dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf (image=quay.io/ceph/ceph:v20, name=eager_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:55 compute-0 systemd[1]: libpod-conmon-dc377c5158162e64cdc2ff8f73a89658850aca2ce7f887c0aaf0cc35e30e22cf.scope: Deactivated successfully.
Feb 01 14:51:55 compute-0 sudo[97268]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:55 compute-0 sudo[97155]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:56 compute-0 sudo[97491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:56 compute-0 sudo[97491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:56 compute-0 sudo[97491]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:56 compute-0 sudo[97516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:51:56 compute-0 sudo[97516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.42909091 +0000 UTC m=+0.035714928 container create 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:51:56 compute-0 systemd[1]: Started libpod-conmon-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope.
Feb 01 14:51:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.497857928 +0000 UTC m=+0.104481976 container init 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.505957977 +0000 UTC m=+0.112581995 container start 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb 01 14:51:56 compute-0 nifty_wing[97570]: 167 167
Feb 01 14:51:56 compute-0 systemd[1]: libpod-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope: Deactivated successfully.
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.413708656 +0000 UTC m=+0.020332684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.510091013 +0000 UTC m=+0.116715081 container attach 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.510407612 +0000 UTC m=+0.117031640 container died 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 01 14:51:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb 01 14:51:56 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 01 14:51:56 compute-0 ceph-mon[75179]: osdmap e37: 3 total, 3 up, 3 in
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: pgmap v77: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1076259395' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:51:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2993494818' entity='client.rgw.rgw.compute-0.eusbkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 01 14:51:56 compute-0 ceph-mon[75179]: osdmap e38: 3 total, 3 up, 3 in
Feb 01 14:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-797ef62bb2044f616973c6b80112a4a4c8a305c336907dbf4f031c4a2e995921-merged.mount: Deactivated successfully.
Feb 01 14:51:56 compute-0 podman[97553]: 2026-02-01 14:51:56.55040136 +0000 UTC m=+0.157025388 container remove 2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:56 compute-0 systemd[1]: libpod-conmon-2f5d09d59ddc2dcd768892aa9c008f8b944e0c10f83c63c27a3e2ca4e26404a2.scope: Deactivated successfully.
Feb 01 14:51:56 compute-0 sudo[97611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokjsgkqdgmmmqomxkvwfquivaoecqyn ; /usr/bin/python3'
Feb 01 14:51:56 compute-0 sudo[97611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:56 compute-0 podman[97637]: 2026-02-01 14:51:56.695367645 +0000 UTC m=+0.035783559 container create aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:56 compute-0 python3[97613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:56 compute-0 systemd[1]: Started libpod-conmon-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope.
Feb 01 14:51:56 compute-0 podman[97637]: 2026-02-01 14:51:56.67780748 +0000 UTC m=+0.018223424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:56 compute-0 podman[97651]: 2026-02-01 14:51:56.780726031 +0000 UTC m=+0.062003219 container create 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 radosgw[94941]: v1 topic migration: starting v1 topic migration..
Feb 01 14:51:56 compute-0 radosgw[94941]: v1 topic migration: finished v1 topic migration
Feb 01 14:51:56 compute-0 podman[97637]: 2026-02-01 14:51:56.822658553 +0000 UTC m=+0.163074477 container init aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:56 compute-0 podman[97637]: 2026-02-01 14:51:56.83071525 +0000 UTC m=+0.171131164 container start aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:56 compute-0 podman[97637]: 2026-02-01 14:51:56.835637179 +0000 UTC m=+0.176053263 container attach aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:56 compute-0 radosgw[94941]: framework: beast
Feb 01 14:51:56 compute-0 radosgw[94941]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb 01 14:51:56 compute-0 radosgw[94941]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb 01 14:51:56 compute-0 systemd[1]: Started libpod-conmon-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope.
Feb 01 14:51:56 compute-0 podman[97651]: 2026-02-01 14:51:56.747501925 +0000 UTC m=+0.028779203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:56 compute-0 radosgw[94941]: starting handler: beast
Feb 01 14:51:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:56 compute-0 radosgw[94941]: set uid:gid to 167:167 (ceph:ceph)
Feb 01 14:51:56 compute-0 radosgw[94941]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.eusbkm,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=519a3cee-c587-4379-95a1-5c7fa227c87c,zone_name=default,zonegroup_id=8a86e5a8-eaaa-443e-b262-61c80d35fad5,zonegroup_name=default}
Feb 01 14:51:56 compute-0 podman[97651]: 2026-02-01 14:51:56.894158429 +0000 UTC m=+0.175435657 container init 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:51:56 compute-0 podman[97651]: 2026-02-01 14:51:56.899142149 +0000 UTC m=+0.180419357 container start 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:56 compute-0 podman[97651]: 2026-02-01 14:51:56.903993726 +0000 UTC m=+0.185270924 container attach 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:57 compute-0 eloquent_diffie[97666]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:51:57 compute-0 eloquent_diffie[97666]: --> All data devices are unavailable
Feb 01 14:51:57 compute-0 podman[97637]: 2026-02-01 14:51:57.315089515 +0000 UTC m=+0.655505449 container died aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:51:57 compute-0 systemd[1]: libpod-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope: Deactivated successfully.
Feb 01 14:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacec4825e2d48d4908a2afefa464d68b2869aba27332fc2d5cf7fdbc77fb128-merged.mount: Deactivated successfully.
Feb 01 14:51:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb 01 14:51:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891653748' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 01 14:51:57 compute-0 angry_tu[97690]: mimic
Feb 01 14:51:57 compute-0 podman[97637]: 2026-02-01 14:51:57.376222898 +0000 UTC m=+0.716638832 container remove aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:51:57 compute-0 systemd[1]: libpod-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope: Deactivated successfully.
Feb 01 14:51:57 compute-0 podman[97651]: 2026-02-01 14:51:57.380584191 +0000 UTC m=+0.661861379 container died 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 14:51:57 compute-0 systemd[1]: libpod-conmon-aa0590e83d15d8aeb5019f249d262689d55409d40678b8ba3f58ba856093bec4.scope: Deactivated successfully.
Feb 01 14:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-35db5d754cc527248f29882974b2f706affb36c525aa0338a66c729e554fb9c4-merged.mount: Deactivated successfully.
Feb 01 14:51:57 compute-0 podman[97651]: 2026-02-01 14:51:57.423273404 +0000 UTC m=+0.704550592 container remove 597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca (image=quay.io/ceph/ceph:v20, name=angry_tu, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 14:51:57 compute-0 systemd[1]: libpod-conmon-597e7291db05c297a2a90809d89ab465755f3a14ce86f0e91efb4ba24efc36ca.scope: Deactivated successfully.
Feb 01 14:51:57 compute-0 sudo[97516]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:57 compute-0 sudo[97611]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:57 compute-0 sudo[97756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:57 compute-0 sudo[97756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:57 compute-0 sudo[97756]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:57 compute-0 sudo[97781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:51:57 compute-0 sudo[97781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/891653748' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 01 14:51:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v79: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 241 B/s rd, 483 B/s wr, 1 op/s
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.831744719 +0000 UTC m=+0.060835146 container create a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 14:51:57 compute-0 systemd[1]: Started libpod-conmon-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope.
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.807279779 +0000 UTC m=+0.036370256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:57 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.924116543 +0000 UTC m=+0.153207020 container init a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.9328944 +0000 UTC m=+0.161984827 container start a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.936726568 +0000 UTC m=+0.165817065 container attach a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:51:57 compute-0 jolly_tu[97835]: 167 167
Feb 01 14:51:57 compute-0 systemd[1]: libpod-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope: Deactivated successfully.
Feb 01 14:51:57 compute-0 conmon[97835]: conmon a7f4815c27e2a7ff3245 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope/container/memory.events
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.941113522 +0000 UTC m=+0.170203959 container died a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-696822be9df93c20b0d56457f9abc5962dcd4f926436ea595a8726681f269fbe-merged.mount: Deactivated successfully.
Feb 01 14:51:57 compute-0 podman[97819]: 2026-02-01 14:51:57.985967166 +0000 UTC m=+0.215057593 container remove a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:57 compute-0 systemd[1]: libpod-conmon-a7f4815c27e2a7ff3245709af41a242e0ab9c96b56847a0a50c73bb812d2d3a6.scope: Deactivated successfully.
Feb 01 14:51:58 compute-0 sudo[97879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgxivrpvbizhvklzblugnlwotzlwcmdf ; /usr/bin/python3'
Feb 01 14:51:58 compute-0 sudo[97879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.124200103 +0000 UTC m=+0.047846450 container create db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 14:51:58 compute-0 systemd[1]: Started libpod-conmon-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope.
Feb 01 14:51:58 compute-0 python3[97886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.099741284 +0000 UTC m=+0.023387681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.229716908 +0000 UTC m=+0.153363265 container init db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.238922847 +0000 UTC m=+0.162569194 container start db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.243550358 +0000 UTC m=+0.167196765 container attach db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.279415489 +0000 UTC m=+0.066031543 container create 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 14:51:58 compute-0 systemd[1]: Started libpod-conmon-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope.
Feb 01 14:51:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.254814155 +0000 UTC m=+0.041430269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.365466464 +0000 UTC m=+0.152082578 container init 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.36923016 +0000 UTC m=+0.155846204 container start 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.372362699 +0000 UTC m=+0.158978823 container attach 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 14:51:58 compute-0 eager_albattani[97902]: {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     "0": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "devices": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "/dev/loop3"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             ],
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_name": "ceph_lv0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_size": "21470642176",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "name": "ceph_lv0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "tags": {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.crush_device_class": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.encrypted": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_id": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.vdo": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.with_tpm": "0"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             },
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "vg_name": "ceph_vg0"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         }
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     ],
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     "1": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "devices": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "/dev/loop4"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             ],
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_name": "ceph_lv1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_size": "21470642176",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "name": "ceph_lv1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "tags": {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.crush_device_class": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.encrypted": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_id": "1",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.vdo": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.with_tpm": "0"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             },
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "vg_name": "ceph_vg1"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         }
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     ],
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     "2": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "devices": [
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "/dev/loop5"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             ],
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_name": "ceph_lv2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_size": "21470642176",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "name": "ceph_lv2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "tags": {
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.cluster_name": "ceph",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.crush_device_class": "",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.encrypted": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.objectstore": "bluestore",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osd_id": "2",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.vdo": "0",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:                 "ceph.with_tpm": "0"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             },
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "type": "block",
Feb 01 14:51:58 compute-0 eager_albattani[97902]:             "vg_name": "ceph_vg2"
Feb 01 14:51:58 compute-0 eager_albattani[97902]:         }
Feb 01 14:51:58 compute-0 eager_albattani[97902]:     ]
Feb 01 14:51:58 compute-0 eager_albattani[97902]: }
Feb 01 14:51:58 compute-0 systemd[1]: libpod-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope: Deactivated successfully.
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.524271871 +0000 UTC m=+0.447918218 container died db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:51:58 compute-0 ceph-mon[75179]: pgmap v79: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 241 B/s rd, 483 B/s wr, 1 op/s
Feb 01 14:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b262f6a610bd7671bf4c6df0462d2369e57ae36ed77d017e6e21ba4c97462504-merged.mount: Deactivated successfully.
Feb 01 14:51:58 compute-0 podman[97885]: 2026-02-01 14:51:58.571967485 +0000 UTC m=+0.495613832 container remove db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 14:51:58 compute-0 systemd[1]: libpod-conmon-db6ae76f06f5a0f8bf2b802db6df2359c41358174765fe1d4d80b16ae541af92.scope: Deactivated successfully.
Feb 01 14:51:58 compute-0 sudo[97781]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:58 compute-0 sudo[97961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:51:58 compute-0 sudo[97961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:58 compute-0 sudo[97961]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:58 compute-0 sudo[97986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:51:58 compute-0 sudo[97986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:51:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb 01 14:51:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241932449' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 01 14:51:58 compute-0 eloquent_gates[97922]: 
Feb 01 14:51:58 compute-0 eloquent_gates[97922]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb 01 14:51:58 compute-0 systemd[1]: libpod-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope: Deactivated successfully.
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.910668783 +0000 UTC m=+0.697284797 container died 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1d67b88f99e7c446e9549ffdba1fb3860d9a238a8f354b5e4bf8069c1afc58b-merged.mount: Deactivated successfully.
Feb 01 14:51:58 compute-0 podman[97905]: 2026-02-01 14:51:58.949410825 +0000 UTC m=+0.736026879 container remove 574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f (image=quay.io/ceph/ceph:v20, name=eloquent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:51:58 compute-0 systemd[1]: libpod-conmon-574b78b7b90f83cad4fcf25d03ea4b01ed8324b9fad1b6cbcd574b746af2e83f.scope: Deactivated successfully.
Feb 01 14:51:58 compute-0 sudo[97879]: pam_unix(sudo:session): session closed for user root
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.031419487 +0000 UTC m=+0.050684810 container create 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:51:59 compute-0 systemd[1]: Started libpod-conmon-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope.
Feb 01 14:51:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.013714028 +0000 UTC m=+0.032979311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.117528885 +0000 UTC m=+0.136794268 container init 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.125772897 +0000 UTC m=+0.145038220 container start 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.129246575 +0000 UTC m=+0.148511948 container attach 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:51:59 compute-0 serene_dijkstra[98052]: 167 167
Feb 01 14:51:59 compute-0 systemd[1]: libpod-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope: Deactivated successfully.
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.133367391 +0000 UTC m=+0.152632704 container died 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 14:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad6abee841fa0d5701f0d14300cf32e8f3656ea4c005d91941e44dc156aae4d2-merged.mount: Deactivated successfully.
Feb 01 14:51:59 compute-0 podman[98036]: 2026-02-01 14:51:59.179732688 +0000 UTC m=+0.198998001 container remove 53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 14:51:59 compute-0 systemd[1]: libpod-conmon-53f8357055ef5ddb27888e86a3ddf0da22c976fbf0062fc0163511bb942ea23d.scope: Deactivated successfully.
Feb 01 14:51:59 compute-0 podman[98076]: 2026-02-01 14:51:59.354112604 +0000 UTC m=+0.057519983 container create 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 14:51:59 compute-0 systemd[1]: Started libpod-conmon-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope.
Feb 01 14:51:59 compute-0 podman[98076]: 2026-02-01 14:51:59.333135112 +0000 UTC m=+0.036542561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:51:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:51:59 compute-0 podman[98076]: 2026-02-01 14:51:59.462054147 +0000 UTC m=+0.165461556 container init 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 14:51:59 compute-0 podman[98076]: 2026-02-01 14:51:59.473733276 +0000 UTC m=+0.177140655 container start 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:51:59 compute-0 podman[98076]: 2026-02-01 14:51:59.477253475 +0000 UTC m=+0.180660854 container attach 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:51:59 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/241932449' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 01 14:51:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v80: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Feb 01 14:52:00 compute-0 lvm[98170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:52:00 compute-0 lvm[98170]: VG ceph_vg1 finished
Feb 01 14:52:00 compute-0 lvm[98169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:52:00 compute-0 lvm[98169]: VG ceph_vg0 finished
Feb 01 14:52:00 compute-0 lvm[98172]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:52:00 compute-0 lvm[98172]: VG ceph_vg2 finished
Feb 01 14:52:00 compute-0 boring_chandrasekhar[98091]: {}
Feb 01 14:52:00 compute-0 systemd[1]: libpod-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Deactivated successfully.
Feb 01 14:52:00 compute-0 systemd[1]: libpod-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Consumed 1.077s CPU time.
Feb 01 14:52:00 compute-0 podman[98076]: 2026-02-01 14:52:00.169982283 +0000 UTC m=+0.873389652 container died 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 14:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8427402bf546bb2f84d3a32280e2ec6780a4975923a509911230211ec472d3a-merged.mount: Deactivated successfully.
Feb 01 14:52:00 compute-0 podman[98076]: 2026-02-01 14:52:00.205902845 +0000 UTC m=+0.909310214 container remove 25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 14:52:00 compute-0 systemd[1]: libpod-conmon-25f7c8e59eed55485a90db37f0f5029fe4f9316424d57afaafc1fae78c7faa1e.scope: Deactivated successfully.
Feb 01 14:52:00 compute-0 sudo[97986]: pam_unix(sudo:session): session closed for user root
Feb 01 14:52:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:52:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:52:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:00 compute-0 sudo[98188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:52:00 compute-0 sudo[98188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:52:00 compute-0 sudo[98188]: pam_unix(sudo:session): session closed for user root
Feb 01 14:52:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:00 compute-0 ceph-mon[75179]: pgmap v80: 11 pgs: 1 creating+activating, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Feb 01 14:52:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v81: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 9.2 KiB/s wr, 197 op/s
Feb 01 14:52:02 compute-0 ceph-mon[75179]: pgmap v81: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 9.2 KiB/s wr, 197 op/s
Feb 01 14:52:03 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 2e17c372-c1ad-48d6-8bf0-bbf5585c23cf (Global Recovery Event) in 15 seconds
Feb 01 14:52:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.0 KiB/s wr, 173 op/s
Feb 01 14:52:04 compute-0 ceph-mon[75179]: pgmap v82: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.0 KiB/s wr, 173 op/s
Feb 01 14:52:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Feb 01 14:52:06 compute-0 ceph-mon[75179]: pgmap v83: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 141 op/s
Feb 01 14:52:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.7 KiB/s wr, 126 op/s
Feb 01 14:52:08 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 6 completed events
Feb 01 14:52:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:52:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:08 compute-0 ceph-mon[75179]: pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.7 KiB/s wr, 126 op/s
Feb 01 14:52:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 01 14:52:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:10 compute-0 ceph-mon[75179]: pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 01 14:52:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 01 14:52:12 compute-0 ceph-mon[75179]: pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 01 14:52:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:14 compute-0 ceph-mon[75179]: pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:16 compute-0 ceph-mon[75179]: pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:52:17
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:52:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.243253721423607e-07 of space, bias 4.0, pg target 0.0007491904465708329 quantized to 16 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:52:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:52:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb 01 14:52:18 compute-0 ceph-mon[75179]: pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb 01 14:52:18 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb 01 14:52:18 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 01 14:52:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb 01 14:52:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb 01 14:52:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb 01 14:52:19 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 01 14:52:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:19 compute-0 ceph-mon[75179]: osdmap e39: 3 total, 3 up, 3 in
Feb 01 14:52:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb 01 14:52:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb 01 14:52:20 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb 01 14:52:20 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 01 14:52:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:20 compute-0 ceph-mon[75179]: pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:20 compute-0 ceph-mon[75179]: osdmap e40: 3 total, 3 up, 3 in
Feb 01 14:52:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:20 compute-0 ceph-mon[75179]: osdmap e41: 3 total, 3 up, 3 in
Feb 01 14:52:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v94: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb 01 14:52:21 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 01 14:52:21 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=12.305476189s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active pruub 92.813713074s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb 01 14:52:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 01 14:52:21 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=12.305476189s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown pruub 92.813713074s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:21 compute-0 ceph-mon[75179]: osdmap e42: 3 total, 3 up, 3 in
Feb 01 14:52:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=9.358250618s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 83.583534241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.378397942s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 88.302650452s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=10.378397942s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 88.302650452s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=9.358250618s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 83.583534241s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb 01 14:52:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 01 14:52:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb 01 14:52:22 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb 01 14:52:22 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 01 14:52:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=40/43 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=42/43 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=42/43 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:22 compute-0 ceph-mon[75179]: pgmap v94: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 01 14:52:22 compute-0 ceph-mon[75179]: osdmap e43: 3 total, 3 up, 3 in
Feb 01 14:52:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:23 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb 01 14:52:23 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb 01 14:52:23 compute-0 ceph-mgr[75469]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Feb 01 14:52:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v97: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 01 14:52:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb 01 14:52:23 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 44 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44 pruub=12.310206413s) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 31'38 active pruub 94.830680847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:23 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 01 14:52:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:23 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 44 pg[6.0( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44 pruub=12.310206413s) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 0'0 unknown pruub 94.830680847s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:23 compute-0 ceph-mon[75179]: 2.1c scrub starts
Feb 01 14:52:23 compute-0 ceph-mon[75179]: 2.1c scrub ok
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:23 compute-0 ceph-mon[75179]: osdmap e44: 3 total, 3 up, 3 in
Feb 01 14:52:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.096881866s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active pruub 86.617851257s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.096881866s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown pruub 86.617851257s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb 01 14:52:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb 01 14:52:24 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb 01 14:52:24 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 01 14:52:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=21/22 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 45 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=21/21 les/c/f=22/22/0 sis=44) [0] r=0 lpr=44 pi=[21,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=44/45 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:24 compute-0 ceph-mon[75179]: pgmap v97: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:24 compute-0 ceph-mon[75179]: osdmap e45: 3 total, 3 up, 3 in
Feb 01 14:52:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb 01 14:52:25 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v100: 150 pgs: 46 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb 01 14:52:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.298465729s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active pruub 92.332054138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=30/31 n=6 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=11.514460564s) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 31'5 active pruub 92.548133850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:25 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 01 14:52:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[8.0( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46 pruub=11.514460564s) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 0'0 unknown pruub 92.548133850s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.298465729s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown pruub 92.332054138s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:25 compute-0 ceph-mon[75179]: 3.1f scrub starts
Feb 01 14:52:25 compute-0 ceph-mon[75179]: 3.1f scrub ok
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:25 compute-0 ceph-mon[75179]: osdmap e46: 3 total, 3 up, 3 in
Feb 01 14:52:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:26 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb 01 14:52:26 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb 01 14:52:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb 01 14:52:26 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb 01 14:52:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb 01 14:52:26 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 01 14:52:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb 01 14:52:26 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.18( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.19( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.4( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.7( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.2( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.9( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.b( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.5( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.3( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.a( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.8( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.d( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.c( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.13( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.12( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.11( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.10( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.17( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.16( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.15( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.14( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=30/31 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.19( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.7( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=46/47 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.0( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 31'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.1( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.3( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.8( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.5( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.13( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.17( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.16( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=30/30 les/c/f=31/31/0 sis=46) [1] r=0 lpr=46 pi=[30,46)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:26 compute-0 ceph-mon[75179]: pgmap v100: 150 pgs: 46 unknown, 104 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:26 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:26 compute-0 ceph-mon[75179]: osdmap e47: 3 total, 3 up, 3 in
Feb 01 14:52:26 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 01 14:52:27 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb 01 14:52:27 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v103: 212 pgs: 62 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb 01 14:52:27 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb 01 14:52:27 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 48 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=32/33 n=210 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=11.519697189s) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 38'482 active pruub 94.567817688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 5f865ac9-5821-461d-bf71-3fd7b8b7d9e9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event cd02d19a-bf29-4c1f-aab0-1f16f44d0f44 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event f0be4e48-5081-43b2-a261-e596203beb2b (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 9d856795-73d8-4b3a-a173-83651471199a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event fd83c393-8d35-4899-98de-8e27e64bea40 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 5770ec13-3dda-4253-ab1e-ee301548257c (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event ee10f24e-a116-4aee-ae4a-5595d10d2b8e (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 27d05e9a-17d4-4f6a-8d65-1d5c2a8f17c3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 2cbf1a6f-1387-4fc6-b78d-aef03e2d80a2 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 01 14:52:27 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 0c710704-685a-423d-80a4-a5bae645d96a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb 01 14:52:27 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 48 pg[9.0( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=11.519697189s) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 0'0 unknown pruub 94.567817688s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65600 space 0x55a03c028240 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64300 space 0x55a03c3ceb40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc5a700 space 0x55a03d1aae40 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15900 space 0x55a03d1fe540 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15780 space 0x55a03cd8ab40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc78180 space 0x55a03c03c840 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc78080 space 0x55a03c49ae40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21280 space 0x55a03c50c540 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2580 space 0x55a03c4ee840 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20580 space 0x55a03c515a40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15880 space 0x55a03c568840 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2100 space 0x55a03c49a540 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc5b500 space 0x55a03c50d440 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21b80 space 0x55a03c2f3740 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27880 space 0x55a03c558540 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21580 space 0x55a03c569140 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65000 space 0x55a03c001740 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26d00 space 0x55a03c50d740 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21780 space 0x55a03c569a40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc99080 space 0x55a03c49b740 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27d80 space 0x55a03c510e40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27b00 space 0x55a03c4cdd40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cce0d00 space 0x55a03c463a40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21d00 space 0x55a03c4cd440 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15380 space 0x55a03c463740 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2880 space 0x55a03c4cc240 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26380 space 0x55a03c000b40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc66f80 space 0x55a03c515440 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64900 space 0x55a03c416b40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26b80 space 0x55a03c58bd40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26a00 space 0x55a03c559140 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21100 space 0x55a03c50ce40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67900 space 0x55a03c32a240 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26300 space 0x55a03c416240 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20e80 space 0x55a03c302e40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65580 space 0x55a03c511d40 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20c80 space 0x55a03c4efa40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc21300 space 0x55a03c3cf440 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67700 space 0x55a03c02dd40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67c00 space 0x55a03c49bd40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20780 space 0x55a03c4ef140 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd2900 space 0x55a03c4ccb40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64780 space 0x55a03c00d140 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cce0c00 space 0x55a03c463140 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc15400 space 0x55a03d1dbd40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0c80 space 0x55a03c02c240 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc65880 space 0x55a03c50cb40 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc14980 space 0x55a03c462840 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64b00 space 0x55a03c32ba40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26500 space 0x55a03c58a240 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc67980 space 0x55a03c02cb40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccd3b00 space 0x55a03c510540 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64e80 space 0x55a03c514b40 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0b80 space 0x55a03c303740 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc64500 space 0x55a03c02d440 0x0~9a clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc26780 space 0x55a03c58ab40 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03ccf0d00 space 0x55a03c302540 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc20c00 space 0x55a03c2f2840 0x0~98 clean)
Feb 01 14:52:27 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a03d07a900) split_cache   moving buffer(0x55a03cc27f80 space 0x55a03c511740 0x0~6e clean)
Feb 01 14:52:27 compute-0 ceph-mon[75179]: 4.1f scrub starts
Feb 01 14:52:27 compute-0 ceph-mon[75179]: 4.1f scrub ok
Feb 01 14:52:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:27 compute-0 ceph-mon[75179]: osdmap e48: 3 total, 3 up, 3 in
Feb 01 14:52:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb 01 14:52:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb 01 14:52:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb 01 14:52:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb 01 14:52:28 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 16 completed events
Feb 01 14:52:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:52:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb 01 14:52:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb 01 14:52:28 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.15( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.14( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.17( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.16( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.11( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.10( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.13( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.12( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.9( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.2( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.8( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.6( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.3( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.7( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.5( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1a( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.4( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1b( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.18( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.19( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1e( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1c( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1d( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=32/33 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.14( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.10( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.0( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 38'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.12( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.2( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.e( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.a( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.5( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1a( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.18( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1e( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 49 pg[9.4( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=38'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:28 compute-0 ceph-mon[75179]: 4.1c scrub starts
Feb 01 14:52:28 compute-0 ceph-mon[75179]: 4.1c scrub ok
Feb 01 14:52:28 compute-0 ceph-mon[75179]: pgmap v103: 212 pgs: 62 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:28 compute-0 ceph-mon[75179]: 3.1e scrub starts
Feb 01 14:52:28 compute-0 ceph-mon[75179]: 3.1e scrub ok
Feb 01 14:52:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:28 compute-0 ceph-mon[75179]: osdmap e49: 3 total, 3 up, 3 in
Feb 01 14:52:29 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb 01 14:52:29 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 48 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=34/35 n=9 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=12.279128075s) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 38'17 active pruub 92.903282166s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.0( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=12.279128075s) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 0'0 unknown pruub 92.903282166s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.3( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.4( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.5( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.6( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.2( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.7( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.8( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.9( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.10( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.11( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.12( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.13( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.14( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.15( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.16( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.17( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.18( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.19( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1a( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1b( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1c( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1d( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1e( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 49 pg[10.1f( v 38'18 lc 0'0 (0'0,38'18] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:29 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb 01 14:52:29 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb 01 14:52:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v106: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 01 14:52:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb 01 14:52:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb 01 14:52:29 compute-0 ceph-mon[75179]: 4.7 scrub starts
Feb 01 14:52:29 compute-0 ceph-mon[75179]: 4.7 scrub ok
Feb 01 14:52:29 compute-0 ceph-mon[75179]: 3.1d scrub starts
Feb 01 14:52:29 compute-0 ceph-mon[75179]: 3.1d scrub ok
Feb 01 14:52:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 01 14:52:29 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.12( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1d( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.18( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.5( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:29 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1c( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.9( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.0( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 38'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.3( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.c( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.d( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.14( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.15( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 50 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[34,48)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb 01 14:52:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb 01 14:52:30 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.110879898s) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active pruub 98.637939453s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:30 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.110879898s) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown pruub 98.637939453s@ mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb 01 14:52:30 compute-0 ceph-mon[75179]: 4.6 scrub starts
Feb 01 14:52:30 compute-0 ceph-mon[75179]: 4.6 scrub ok
Feb 01 14:52:30 compute-0 ceph-mon[75179]: pgmap v106: 274 pgs: 124 unknown, 150 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 01 14:52:30 compute-0 ceph-mon[75179]: osdmap e50: 3 total, 3 up, 3 in
Feb 01 14:52:30 compute-0 ceph-mon[75179]: 2.1b scrub starts
Feb 01 14:52:30 compute-0 ceph-mon[75179]: 2.1b scrub ok
Feb 01 14:52:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb 01 14:52:31 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=50/51 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [1] r=0 lpr=50 pi=[36,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb 01 14:52:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb 01 14:52:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb 01 14:52:32 compute-0 ceph-mon[75179]: osdmap e51: 3 total, 3 up, 3 in
Feb 01 14:52:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb 01 14:52:33 compute-0 ceph-mon[75179]: 4.b scrub starts
Feb 01 14:52:33 compute-0 ceph-mon[75179]: 4.b scrub ok
Feb 01 14:52:33 compute-0 ceph-mon[75179]: pgmap v109: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:33 compute-0 ceph-mon[75179]: 3.1b scrub starts
Feb 01 14:52:33 compute-0 ceph-mon[75179]: 3.1b scrub ok
Feb 01 14:52:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:34 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb 01 14:52:34 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb 01 14:52:35 compute-0 ceph-mon[75179]: pgmap v110: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:35 compute-0 ceph-mon[75179]: 2.1d scrub starts
Feb 01 14:52:35 compute-0 ceph-mon[75179]: 2.1d scrub ok
Feb 01 14:52:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb 01 14:52:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:52:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb 01 14:52:36 compute-0 ceph-mon[75179]: 3.1a scrub starts
Feb 01 14:52:36 compute-0 ceph-mon[75179]: 3.1a scrub ok
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb 01 14:52:36 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.948302269s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422180176s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945576668s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.419540405s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.948179245s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422180176s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870571136s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344612122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945492744s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.419540405s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870471001s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344543457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870528221s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344612122s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870384216s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344543457s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.947529793s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.421836853s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.947505951s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.421836853s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856965065s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331375122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856930733s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331375122s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856827736s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331352234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856798172s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331352234s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856406212s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331367493s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869767189s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344749451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856370926s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331367493s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869735718s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344749451s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856076241s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331306458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.856043816s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331306458s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946708679s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422088623s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869020462s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344444275s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855978012s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331413269s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868986130s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344444275s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855928421s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331413269s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869002342s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344566345s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868979454s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344566345s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.855505943s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331291199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868963242s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344741821s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946277618s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422088623s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868942261s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344741821s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946249962s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422088623s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868713379s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.344757080s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946063042s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422157288s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868686676s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.344757080s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.946038246s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422157288s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854987144s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331245422s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854964256s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331245422s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854922295s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331291199s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868770599s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345214844s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868749619s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345214844s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945592880s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422241211s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854549408s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331207275s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945425034s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422088623s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945536613s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422241211s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.854493141s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331207275s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868156433s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345024109s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945334435s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422195435s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868131638s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345024109s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945268631s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422195435s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853488922s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.330566406s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853464127s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.330566406s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945116997s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422271729s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945092201s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422271729s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.853004456s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.330314636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.852982521s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.330314636s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.945018768s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422370911s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867769241s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345153809s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867607117s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345199585s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944686890s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422286987s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851965904s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329574585s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867571831s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345199585s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851943016s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329574585s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944660187s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422286987s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944940567s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422370911s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851665497s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329559326s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851644516s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329559326s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867300034s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345275879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944346428s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422409058s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.867216110s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345275879s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851188660s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329292297s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944303513s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422409058s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.851168633s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329292297s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944171906s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422485352s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870786667s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349098206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850994110s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329330444s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.944148064s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422485352s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870760918s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349098206s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850971222s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329330444s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870546341s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349105835s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850893021s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329460144s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866640091s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345245361s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.870521545s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349105835s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850867271s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329460144s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866616249s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345245361s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943723679s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422523499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850354195s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329200745s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943688393s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422523499s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850330353s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329200745s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866353035s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345275879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943520546s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422538757s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866267204s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345275879s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850158691s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329193115s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850138664s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329193115s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943484306s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422538757s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866081238s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.345283508s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943339348s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422576904s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849864960s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.329116821s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866742134s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345153809s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.866059303s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.345283508s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943315506s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422576904s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849838257s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.329116821s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849318504s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.328773499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943099976s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422584534s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.849292755s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.328773499s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=48/50 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.943069458s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422584534s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.844075203s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323715210s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942911148s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422615051s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.844048500s) [1] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323715210s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843911171s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323646545s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942868233s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422615051s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843870163s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323646545s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942697525s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 active pruub 97.422653198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843700409s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.323715210s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942648888s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 38'18 unknown NOTIFY pruub 97.422653198s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869227409s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349296570s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.843660355s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.323715210s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.869144440s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349296570s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868545532s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349266052s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850687027s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 active pruub 98.331428528s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868513107s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349266052s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942833900s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.423614502s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848082542s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527412415s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.863107681s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.542610168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848316193s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527832031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847899437s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527442932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.863079071s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.542610168s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848278999s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527832031s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847791672s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527442932s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.859298706s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539077759s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.859272957s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539077759s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847864151s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527755737s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848148346s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528121948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847785950s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527755737s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848121643s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528121948s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.862565994s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.542617798s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.862540245s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.542617798s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847656250s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527854919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847715378s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527954102s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847633362s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527854919s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858572006s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.538825989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847688675s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527954102s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858549118s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.538825989s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942800522s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.423614502s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868285179s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 100.349220276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.868234634s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 100.349220276s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.942672729s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422653198s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847630501s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.527999878s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847601891s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527999878s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847013474s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.527412415s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847607613s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528076172s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847585678s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528076172s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858558655s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539123535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858534813s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539123535s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858412743s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539138794s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847568512s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.528335571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858382225s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539138794s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.847545624s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.528335571s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849721909s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530708313s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849698067s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530708313s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=40/43 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52 pruub=10.850545883s) [0] r=-1 lpr=52 pi=[40,52)/1 crt=0'0 unknown NOTIFY pruub 98.331428528s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941585541s) [1] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422653198s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941538811s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 active pruub 97.422698975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=48/50 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52 pruub=9.941518784s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 unknown NOTIFY pruub 97.422698975s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849439621s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530769348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858036041s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539375305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.858005524s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539375305s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849103928s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530548096s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849063873s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530548096s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849410057s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530769348s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.857892990s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 active pruub 107.539421082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.857870102s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539421082s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848866463s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530563354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849171638s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530891418s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849019051s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530761719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.849148750s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530891418s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848813057s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530563354s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848981857s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530761719s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848928452s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530754089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848891258s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530754089s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848884583s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530899048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848855972s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530906677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848856926s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530899048s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848832130s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530906677s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848437309s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 105.530899048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.848410606s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 105.530899048s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.871395111s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056854248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.876036644s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061561584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.871347427s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056854248s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.876002312s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061561584s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842407227s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028205872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842374802s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028205872s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870972633s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056861877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870937347s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056861877s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870798111s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056823730s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870768547s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056823730s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.951157570s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137321472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.951132774s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137321472s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842537880s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028846741s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.842505455s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028846741s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870776176s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.057128906s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.870702744s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.057128906s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837202072s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.023963928s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837156296s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.023963928s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.949327469s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137107849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.949280739s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137107849s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872479439s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.060394287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872416496s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.060394287s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868700981s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056846619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868659973s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056846619s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840605736s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.028900146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868402481s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056755066s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868370056s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056755066s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840518951s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.028900146s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868306160s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056808472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.868268967s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056808472s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872969627s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061561584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872942924s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061561584s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947257996s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.136054993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947229385s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.136054993s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867700577s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056541443s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.948195457s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947972298s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137062073s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947851181s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867271423s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056533813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872364998s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061576843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947780609s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137062073s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867238045s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056533813s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.872241974s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061576843s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.867666245s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056541443s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947936058s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137374878s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.841079712s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030624390s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947895050s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137374878s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.841053963s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030624390s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866762161s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056442261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866786957s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056556702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866764069s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056556702s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947862625s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137649536s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866366386s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056198120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866345406s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056198120s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947803497s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137649536s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.871632576s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061660767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.871613503s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061660767s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947594643s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840612411s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030738831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.947533607s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137657166s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866209030s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056358337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840570450s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030738831s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866174698s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056358337s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840394974s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030685425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.866731644s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056442261s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.840289116s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030685425s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865409851s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.056442261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946555138s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865369797s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.056442261s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865055084s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.056182861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946516037s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137657166s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.865015030s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.056182861s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.860674858s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.052162170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.870164871s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061729431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.e( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.860606194s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.052162170s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946076393s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137687683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.870128632s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061729431s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946034431s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137687683s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839039803s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030761719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838968277s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030761719s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839347839s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839310646s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031219482s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.873323441s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065277100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.873285294s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065277100s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.945528984s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137664795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838544846s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030708313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859667778s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051811218s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.838519096s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030708313s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859598160s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051811218s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839510918s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031959534s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.869253159s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061729431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.839484215s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031959534s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946880341s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.869215965s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061729431s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.946842194s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859191895s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051849365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.945391655s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137664795s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859155655s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051849365s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859388351s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.052131653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.859363556s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.052131653s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837987900s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030799866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.837941170s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030799866s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944193840s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.137748718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944156647s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.137748718s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857801437s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857769012s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051589966s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.867748260s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.061843872s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.867710114s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.061843872s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857240677s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051589966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.857217789s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051589966s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944826126s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139320374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856967926s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051498413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856949806s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051498413s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.944797516s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139320374s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856684685s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051376343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856664658s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051376343s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856701851s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051490784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.856669426s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051490784s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.855905533s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051216125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.855861664s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051216125s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.943911552s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139404297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.943844795s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139404297s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.826411247s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030937195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.826364517s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030937195s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.846562386s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051414490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.846531868s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051414490s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.859771729s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.064926147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.825776100s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.030952454s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.934087753s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139343262s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.825731277s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.030952454s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.934059143s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139343262s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.843759537s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049308777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.843726158s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049308777s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.845353127s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.051208496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.6( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.845333099s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.051208496s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858901978s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065002441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.933281898s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858864784s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065002441s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.933259010s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.842927933s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049301147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.842905998s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049301147s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.824503899s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031005859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.844851494s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.051193237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.824473381s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031005859s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.844413757s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.051193237s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.858564377s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.064926147s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.834575653s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049278259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=46/47 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.834532738s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049278259s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816321373s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816288948s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031219482s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.849802017s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 38'483 active pruub 100.064933777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.849752426s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 38'483 unknown NOTIFY pruub 100.064933777s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.924089432s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139511108s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.924052238s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139511108s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833435059s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049072266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833395004s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049072266s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923677444s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139442444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923585892s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139442444s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.815390587s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031356812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833096504s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049079895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.815352440s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031356812s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.833050728s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049079895s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832967758s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.049087524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848978996s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065254211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923089981s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139411926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848951340s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065254211s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923061371s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139411926s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.923016548s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139610291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816305161s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.032897949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922987938s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139610291s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832287788s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048957825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.816262245s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.032897949s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832257271s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048957825s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848461151s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065414429s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.848436356s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065414429s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922493935s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139488220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.922467232s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139488220s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.832938194s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.049087524s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.831164360s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048934937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.831125259s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048934937s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.847593307s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065498352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830725670s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.048667908s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.847569466s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065498352s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813379288s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031364441s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830689430s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.048667908s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813345909s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031364441s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921483994s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139564514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921455383s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139564514s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813126564s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031387329s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.813084602s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031387329s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921098709s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 active pruub 102.139610291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830430984s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 106.049003601s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=10.921034813s) [2] r=-1 lpr=52 pi=[50,52)/1 crt=0'0 unknown NOTIFY pruub 102.139610291s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.830393791s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 106.049003601s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.846801758s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 active pruub 100.065437317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=8.846771240s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 100.065437317s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829882622s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048683167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829850197s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048683167s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.829357147s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 active pruub 106.048583984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=46/47 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=14.827646255s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 unknown NOTIFY pruub 106.048583984s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.809946060s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 102.031440735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:36 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=10.809902191s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 102.031440735s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb 01 14:52:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb 01 14:52:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb 01 14:52:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb 01 14:52:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-mon[75179]: pgmap v111: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:52:37 compute-0 ceph-mon[75179]: osdmap e52: 3 total, 3 up, 3 in
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.15( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.11( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.2( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.4( v 31'6 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.14( v 50'19 lc 35'7 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.12( v 50'19 lc 38'17 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.13( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.10( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.11( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.f( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.14( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.16( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.2( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.b( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.1( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.6( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.19( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[10.1a( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [1] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.1b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.1c( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [2] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[8.12( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.e( v 50'19 lc 35'4 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.e( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.f( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.d( v 50'19 lc 35'5 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.17( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.7( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.4( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.15( v 50'19 lc 35'3 (0'0,50'19] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.8( v 38'18 (0'0,38'18] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.b( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.9( v 50'19 lc 35'8 (0'0,50'19] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.10( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.9( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.6( v 31'6 lc 0'0 (0'0,31'6] local-lis/les=52/53 n=1 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=40/17 lis/c=40/40 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[10.1e( v 38'18 (0'0,38'18] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=38'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1d( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1f( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.18( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[8.1a( v 31'6 (0'0,31'6] local-lis/les=52/53 n=0 ec=46/30 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=31'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:37 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 01 14:52:37 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 01 14:52:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 01 14:52:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 01 14:52:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 01 14:52:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 01 14:52:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb 01 14:52:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 01 14:52:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 01 14:52:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb 01 14:52:38 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb 01 14:52:38 compute-0 ceph-mon[75179]: 4.1e scrub starts
Feb 01 14:52:38 compute-0 ceph-mon[75179]: 4.1e scrub ok
Feb 01 14:52:38 compute-0 ceph-mon[75179]: osdmap e53: 3 total, 3 up, 3 in
Feb 01 14:52:38 compute-0 ceph-mon[75179]: 11.16 scrub starts
Feb 01 14:52:38 compute-0 ceph-mon[75179]: 11.16 scrub ok
Feb 01 14:52:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 01 14:52:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 01 14:52:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 01 14:52:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 01 14:52:38 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event 23ca1801-a40f-405c-bbf0-4b566eca4f29 (Global Recovery Event) in 15 seconds
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.395108223s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.536727905s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.395028114s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.536727905s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396500587s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539016724s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396461487s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539016724s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396233559s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539131165s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396144867s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539131165s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396371841s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 active pruub 107.539390564s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 54 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54 pruub=10.396327019s) [1] r=-1 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 107.539390564s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.13( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=53/54 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=49'484 lcod 38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb 01 14:52:39 compute-0 ceph-mon[75179]: 2.1a scrub starts
Feb 01 14:52:39 compute-0 ceph-mon[75179]: 2.1a scrub ok
Feb 01 14:52:39 compute-0 ceph-mon[75179]: pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 01 14:52:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 01 14:52:39 compute-0 ceph-mon[75179]: osdmap e54: 3 total, 3 up, 3 in
Feb 01 14:52:39 compute-0 ceph-mon[75179]: 5.1f scrub starts
Feb 01 14:52:39 compute-0 ceph-mon[75179]: 5.1f scrub ok
Feb 01 14:52:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb 01 14:52:39 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.421211243s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648559570s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420630455s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648559570s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420741081s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.649414062s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.420618057s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649414062s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.419396400s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648750305s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.419229507s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648750305s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.418274879s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648757935s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.418058395s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648757935s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=54/55 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.417829514s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 active pruub 109.648933411s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.417583466s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648933411s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:39 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 55 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=54/55 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:39 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 55 pg[6.e( v 32'39 lc 31'19 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=54) [1] r=0 lpr=54 pi=[44,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:39 compute-0 sudo[98238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiwqycsysydluazyppjtzxrztppsffyt ; /usr/bin/python3'
Feb 01 14:52:39 compute-0 sudo[98238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:52:39 compute-0 python3[98240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:52:39 compute-0 podman[98241]: 2026-02-01 14:52:39.360744248 +0000 UTC m=+0.054764255 container create ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:52:39 compute-0 systemd[76558]: Starting Mark boot as successful...
Feb 01 14:52:39 compute-0 systemd[76558]: Finished Mark boot as successful.
Feb 01 14:52:39 compute-0 systemd[1]: Started libpod-conmon-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope.
Feb 01 14:52:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:52:39 compute-0 podman[98241]: 2026-02-01 14:52:39.341546327 +0000 UTC m=+0.035566314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:52:39 compute-0 podman[98241]: 2026-02-01 14:52:39.437624985 +0000 UTC m=+0.131644992 container init ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 14:52:39 compute-0 podman[98241]: 2026-02-01 14:52:39.442257056 +0000 UTC m=+0.136277043 container start ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 14:52:39 compute-0 podman[98241]: 2026-02-01 14:52:39.445655112 +0000 UTC m=+0.139675109 container attach ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:52:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 01 14:52:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 01 14:52:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 01 14:52:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 01 14:52:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb 01 14:52:40 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb 01 14:52:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb 01 14:52:40 compute-0 ceph-mon[75179]: osdmap e55: 3 total, 3 up, 3 in
Feb 01 14:52:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 01 14:52:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 01 14:52:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 01 14:52:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 01 14:52:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb 01 14:52:40 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411252975s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649475098s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411259651s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649597168s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411093712s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649475098s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.411152840s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649597168s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.412872314s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 38'483 active pruub 109.651443481s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.412771225s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 38'483 unknown NOTIFY pruub 109.651443481s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.410440445s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649291992s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.410350800s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649291992s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409917831s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649002075s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409841537s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649002075s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409867287s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.649505615s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.409816742s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.649505615s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=49'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=49'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.994630814s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.235671997s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.994583130s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.235671997s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997785568s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.240180969s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997887611s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.240409851s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997598648s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.240180969s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=12.997559547s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.240409851s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408428192s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.651412964s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=13.002901077s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 active pruub 108.245918274s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408333778s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.651412964s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56 pruub=13.002845764s) [0] r=-1 lpr=56 pi=[52,56)/1 crt=32'39 unknown NOTIFY pruub 108.245918274s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408208847s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=49'484 lcod 55'485 active pruub 109.651405334s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=53/54 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.408074379s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=49'484 lcod 55'485 unknown NOTIFY pruub 109.651405334s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.405170441s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.648628235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.405096054s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648628235s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.407301903s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.651481628s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.407196999s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.651481628s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.404499054s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 active pruub 109.648864746s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:40 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 56 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=53/54 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.404430389s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 109.648864746s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.9( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.1( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.19( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 56 pg[9.3( v 38'483 (0'0,38'483] local-lis/les=55/56 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:40 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb 01 14:52:40 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb 01 14:52:40 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb 01 14:52:40 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb 01 14:52:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb 01 14:52:41 compute-0 ceph-mon[75179]: pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:52:41 compute-0 ceph-mon[75179]: 7.19 scrub starts
Feb 01 14:52:41 compute-0 ceph-mon[75179]: 7.19 scrub ok
Feb 01 14:52:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 01 14:52:41 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 01 14:52:41 compute-0 ceph-mon[75179]: osdmap e56: 3 total, 3 up, 3 in
Feb 01 14:52:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb 01 14:52:41 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.13( v 55'484 (0'0,55'484] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=55'484 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.11( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.17( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.7( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.7( v 32'39 lc 31'21 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=56/57 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1d( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.1b( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[9.d( v 38'483 (0'0,38'483] local-lis/les=56/57 n=7 ec=48/32 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 57 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:41 compute-0 vigorous_lumiere[98258]: could not fetch user info: no user info saved
Feb 01 14:52:41 compute-0 systemd[1]: libpod-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope: Deactivated successfully.
Feb 01 14:52:41 compute-0 podman[98241]: 2026-02-01 14:52:41.222360575 +0000 UTC m=+1.916380582 container died ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d4173c4c08e4e6bd12c15158569a4af7f1e36f28b009a5fbd8ea5ab28426d1-merged.mount: Deactivated successfully.
Feb 01 14:52:41 compute-0 podman[98241]: 2026-02-01 14:52:41.263116334 +0000 UTC m=+1.957136351 container remove ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664 (image=quay.io/ceph/ceph:v20, name=vigorous_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:52:41 compute-0 systemd[1]: libpod-conmon-ec8f1c0660064f4401c4bc2616ebfd5f4ee17ae53ab970841bede9a1552a9664.scope: Deactivated successfully.
Feb 01 14:52:41 compute-0 sudo[98238]: pam_unix(sudo:session): session closed for user root
Feb 01 14:52:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb 01 14:52:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb 01 14:52:41 compute-0 sudo[98379]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhkecsaynfbhymdfmntzajpuvoqdscrt ; /usr/bin/python3'
Feb 01 14:52:41 compute-0 sudo[98379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:52:41 compute-0 python3[98381]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:52:41 compute-0 podman[98382]: 2026-02-01 14:52:41.595344909 +0000 UTC m=+0.037792676 container create 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 14:52:41 compute-0 systemd[1]: Started libpod-conmon-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope.
Feb 01 14:52:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:52:41 compute-0 podman[98382]: 2026-02-01 14:52:41.67094231 +0000 UTC m=+0.113390097 container init 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:52:41 compute-0 podman[98382]: 2026-02-01 14:52:41.581845879 +0000 UTC m=+0.024293646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 01 14:52:41 compute-0 podman[98382]: 2026-02-01 14:52:41.676321602 +0000 UTC m=+0.118769409 container start 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:52:41 compute-0 podman[98382]: 2026-02-01 14:52:41.680090578 +0000 UTC m=+0.122538345 container attach 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 14:52:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 op/s; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 01 14:52:41 compute-0 busy_allen[98397]: {
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "user_id": "openstack",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "display_name": "openstack",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "email": "",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "suspended": 0,
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "max_buckets": 1000,
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "subusers": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "keys": [
Feb 01 14:52:41 compute-0 busy_allen[98397]:         {
Feb 01 14:52:41 compute-0 busy_allen[98397]:             "user": "openstack",
Feb 01 14:52:41 compute-0 busy_allen[98397]:             "access_key": "HJSLQLIKXTYXGHFHD0W0",
Feb 01 14:52:41 compute-0 busy_allen[98397]:             "secret_key": "QD2Ghu8DgZL7G7Ajq8urcmkK9esvUbwgihgz5x9I",
Feb 01 14:52:41 compute-0 busy_allen[98397]:             "active": true,
Feb 01 14:52:41 compute-0 busy_allen[98397]:             "create_date": "2026-02-01T14:52:41.859752Z"
Feb 01 14:52:41 compute-0 busy_allen[98397]:         }
Feb 01 14:52:41 compute-0 busy_allen[98397]:     ],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "swift_keys": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "caps": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "op_mask": "read, write, delete",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "default_placement": "",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "default_storage_class": "",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "placement_tags": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "bucket_quota": {
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "enabled": false,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "check_on_raw": false,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_size": -1,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_size_kb": 0,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_objects": -1
Feb 01 14:52:41 compute-0 busy_allen[98397]:     },
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "user_quota": {
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "enabled": false,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "check_on_raw": false,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_size": -1,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_size_kb": 0,
Feb 01 14:52:41 compute-0 busy_allen[98397]:         "max_objects": -1
Feb 01 14:52:41 compute-0 busy_allen[98397]:     },
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "temp_url_keys": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "type": "rgw",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "mfa_ids": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "account_id": "",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "path": "/",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "create_date": "2026-02-01T14:52:41.859284Z",
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "tags": [],
Feb 01 14:52:41 compute-0 busy_allen[98397]:     "group_ids": []
Feb 01 14:52:41 compute-0 busy_allen[98397]: }
Feb 01 14:52:41 compute-0 busy_allen[98397]: 
Feb 01 14:52:41 compute-0 systemd[1]: libpod-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope: Deactivated successfully.
Feb 01 14:52:41 compute-0 podman[98483]: 2026-02-01 14:52:41.952422865 +0000 UTC m=+0.037532109 container died 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 01 14:52:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-222832f23944c2d629283b86c55eeebf42691ba138670fd4d903b14c0f5dabd4-merged.mount: Deactivated successfully.
Feb 01 14:52:41 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 01 14:52:41 compute-0 podman[98483]: 2026-02-01 14:52:41.996748275 +0000 UTC m=+0.081857449 container remove 29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6 (image=quay.io/ceph/ceph:v20, name=busy_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:52:42 compute-0 systemd[1]: libpod-conmon-29bc7e957b9d93144dea148870c1b2cfac7acc459b11d571c7eea900c9e39bd6.scope: Deactivated successfully.
Feb 01 14:52:42 compute-0 sudo[98379]: pam_unix(sudo:session): session closed for user root
Feb 01 14:52:42 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 4.1d scrub starts
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 4.1d scrub ok
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 8.16 scrub starts
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 8.16 scrub ok
Feb 01 14:52:42 compute-0 ceph-mon[75179]: osdmap e57: 3 total, 3 up, 3 in
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 10.1f scrub starts
Feb 01 14:52:42 compute-0 ceph-mon[75179]: 10.1f scrub ok
Feb 01 14:52:42 compute-0 ceph-mon[75179]: pgmap v120: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 op/s; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 01 14:52:43 compute-0 ceph-mon[75179]: 3.1c scrub starts
Feb 01 14:52:43 compute-0 ceph-mon[75179]: 3.1c scrub ok
Feb 01 14:52:43 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 01 14:52:43 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 01 14:52:43 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 17 completed events
Feb 01 14:52:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 14:52:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb 01 14:52:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb 01 14:52:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 8 op/s; 1.1 KiB/s, 1 keys/s, 21 objects/s recovering
Feb 01 14:52:44 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb 01 14:52:44 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb 01 14:52:44 compute-0 ceph-mon[75179]: 5.10 scrub starts
Feb 01 14:52:44 compute-0 ceph-mon[75179]: 5.10 scrub ok
Feb 01 14:52:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:52:44 compute-0 ceph-mon[75179]: 4.19 scrub starts
Feb 01 14:52:44 compute-0 ceph-mon[75179]: 4.19 scrub ok
Feb 01 14:52:44 compute-0 ceph-mon[75179]: pgmap v121: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 8 op/s; 1.1 KiB/s, 1 keys/s, 21 objects/s recovering
Feb 01 14:52:44 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb 01 14:52:44 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb 01 14:52:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb 01 14:52:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb 01 14:52:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:45 compute-0 ceph-mon[75179]: 10.1d scrub starts
Feb 01 14:52:45 compute-0 ceph-mon[75179]: 10.1d scrub ok
Feb 01 14:52:45 compute-0 ceph-mon[75179]: 4.3 scrub starts
Feb 01 14:52:45 compute-0 ceph-mon[75179]: 4.3 scrub ok
Feb 01 14:52:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb 01 14:52:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb 01 14:52:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 309 B/s wr, 27 op/s; 942 B/s, 1 keys/s, 18 objects/s recovering
Feb 01 14:52:46 compute-0 ceph-mon[75179]: 10.1c scrub starts
Feb 01 14:52:46 compute-0 ceph-mon[75179]: 10.1c scrub ok
Feb 01 14:52:46 compute-0 ceph-mon[75179]: 4.0 scrub starts
Feb 01 14:52:46 compute-0 ceph-mon[75179]: 4.0 scrub ok
Feb 01 14:52:46 compute-0 ceph-mon[75179]: pgmap v122: 305 pgs: 15 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 309 B/s wr, 27 op/s; 942 B/s, 1 keys/s, 18 objects/s recovering
Feb 01 14:52:47 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 01 14:52:47 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb 01 14:52:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 861 B/s, 2 keys/s, 16 objects/s recovering
Feb 01 14:52:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 01 14:52:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 01 14:52:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 01 14:52:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 01 14:52:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb 01 14:52:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 01 14:52:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 01 14:52:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb 01 14:52:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 01 14:52:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 01 14:52:47 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.117376328s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 active pruub 115.539176941s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:47 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.117288589s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 115.539176941s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:47 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.116833687s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 active pruub 115.539482117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:47 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 58 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58 pruub=9.116793633s) [1] r=-1 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 115.539482117s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:47 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb 01 14:52:47 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 58 pg[6.4( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:47 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 58 pg[6.c( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb 01 14:52:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:52:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:52:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb 01 14:52:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb 01 14:52:48 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb 01 14:52:48 compute-0 ceph-mon[75179]: 4.c scrub starts
Feb 01 14:52:48 compute-0 ceph-mon[75179]: 4.c scrub ok
Feb 01 14:52:48 compute-0 ceph-mon[75179]: pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 861 B/s, 2 keys/s, 16 objects/s recovering
Feb 01 14:52:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 01 14:52:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 01 14:52:48 compute-0 ceph-mon[75179]: osdmap e58: 3 total, 3 up, 3 in
Feb 01 14:52:48 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 59 pg[6.4( v 32'39 lc 31'15 (0'0,32'39] local-lis/les=58/59 n=2 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:48 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 59 pg[6.c( v 32'39 lc 31'17 (0'0,32'39] local-lis/les=58/59 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=58) [1] r=0 lpr=58 pi=[44,58)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 29 op/s; 80 B/s, 1 keys/s, 0 objects/s recovering
Feb 01 14:52:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 01 14:52:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 01 14:52:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 01 14:52:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 01 14:52:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb 01 14:52:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 01 14:52:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 01 14:52:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb 01 14:52:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb 01 14:52:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297952652s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 active pruub 116.240661621s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297438622s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 active pruub 116.240341187s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297299385s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 unknown NOTIFY pruub 116.240341187s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:49 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 60 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=52/53 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=11.297692299s) [0] r=-1 lpr=60 pi=[52,60)/1 crt=32'39 unknown NOTIFY pruub 116.240661621s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:49 compute-0 ceph-mon[75179]: 8.17 scrub starts
Feb 01 14:52:49 compute-0 ceph-mon[75179]: 8.17 scrub ok
Feb 01 14:52:49 compute-0 ceph-mon[75179]: osdmap e59: 3 total, 3 up, 3 in
Feb 01 14:52:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 01 14:52:49 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 01 14:52:49 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 60 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb 01 14:52:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb 01 14:52:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb 01 14:52:50 compute-0 ceph-mon[75179]: pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 29 op/s; 80 B/s, 1 keys/s, 0 objects/s recovering
Feb 01 14:52:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 01 14:52:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 01 14:52:50 compute-0 ceph-mon[75179]: osdmap e60: 3 total, 3 up, 3 in
Feb 01 14:52:50 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 61 pg[6.5( v 32'39 lc 31'11 (0'0,32'39] local-lis/les=60/61 n=2 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:50 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 61 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=60) [0] r=0 lpr=60 pi=[52,60)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:51 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb 01 14:52:51 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb 01 14:52:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb 01 14:52:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb 01 14:52:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Feb 01 14:52:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 01 14:52:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 01 14:52:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 01 14:52:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 01 14:52:51 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb 01 14:52:51 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb 01 14:52:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb 01 14:52:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 01 14:52:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 01 14:52:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb 01 14:52:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb 01 14:52:51 compute-0 ceph-mon[75179]: osdmap e61: 3 total, 3 up, 3 in
Feb 01 14:52:51 compute-0 ceph-mon[75179]: 10.1b scrub starts
Feb 01 14:52:51 compute-0 ceph-mon[75179]: 10.1b scrub ok
Feb 01 14:52:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 01 14:52:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 01 14:52:51 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb 01 14:52:52 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb 01 14:52:52 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb 01 14:52:52 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 11.13 scrub starts
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 11.13 scrub ok
Feb 01 14:52:52 compute-0 ceph-mon[75179]: pgmap v129: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 4.15 scrub starts
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 4.15 scrub ok
Feb 01 14:52:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 01 14:52:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 01 14:52:52 compute-0 ceph-mon[75179]: osdmap e62: 3 total, 3 up, 3 in
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 2.12 scrub starts
Feb 01 14:52:52 compute-0 ceph-mon[75179]: 2.12 scrub ok
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529041290s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 active pruub 124.060974121s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.528965950s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 124.060974121s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529239655s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=57'488 lcod 57'488 active pruub 124.062179565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.529177666s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=57'488 lcod 57'488 unknown NOTIFY pruub 124.062179565s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531764984s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 active pruub 124.065437317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531714439s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 124.065437317s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531764030s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=56'484 lcod 56'484 active pruub 124.065750122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 62 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62 pruub=15.531688690s) [2] r=-1 lpr=62 pi=[48,62)/1 crt=56'484 lcod 56'484 unknown NOTIFY pruub 124.065750122s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=62) [2] r=0 lpr=62 pi=[48,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 361 B/s, 1 objects/s recovering
Feb 01 14:52:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 01 14:52:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 01 14:52:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 01 14:52:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 01 14:52:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb 01 14:52:53 compute-0 ceph-mon[75179]: 7.1e scrub starts
Feb 01 14:52:53 compute-0 ceph-mon[75179]: 7.1e scrub ok
Feb 01 14:52:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 01 14:52:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 01 14:52:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 01 14:52:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 01 14:52:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb 01 14:52:53 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=-1 lpr=63 pi=[48,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=57'488 lcod 57'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=57'488 lcod 57'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=56'484 lcod 56'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=56'484 lcod 56'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:53 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 63 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928535461s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'486 lcod 57'486 active pruub 123.753288269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928648949s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 active pruub 123.753433228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928591728s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 unknown NOTIFY pruub 123.753433228s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928303719s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 123.753288269s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928470612s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 active pruub 123.753669739s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928447723s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=57'484 lcod 57'484 unknown NOTIFY pruub 123.753669739s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928160667s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=38'483 active pruub 123.753845215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 63 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=10.928115845s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=38'483 unknown NOTIFY pruub 123.753845215s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 sshd-session[98498]: Accepted publickey for zuul from 192.168.122.30 port 42726 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:52:54 compute-0 systemd-logind[786]: New session 33 of user zuul.
Feb 01 14:52:54 compute-0 systemd[1]: Started Session 33 of User zuul.
Feb 01 14:52:54 compute-0 sshd-session[98498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:52:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb 01 14:52:54 compute-0 ceph-mon[75179]: pgmap v131: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 361 B/s, 1 objects/s recovering
Feb 01 14:52:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 01 14:52:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 01 14:52:54 compute-0 ceph-mon[75179]: osdmap e63: 3 total, 3 up, 3 in
Feb 01 14:52:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb 01 14:52:54 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=57'484 lcod 57'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 64 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[56,64)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:54 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=57'489 lcod 57'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:54 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:54 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:54 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 64 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=63) [2]/[1] async=[2] r=0 lpr=63 pi=[48,63)/1 crt=57'485 lcod 56'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:52:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb 01 14:52:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb 01 14:52:55 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=57'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497686386s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=57'485 lcod 56'484 active pruub 126.046806335s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497288704s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 active pruub 126.046455383s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497209549s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 126.046455383s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.497092247s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=57'489 lcod 57'488 active pruub 126.046386719s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=63/64 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496965408s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=57'489 lcod 57'488 unknown NOTIFY pruub 126.046386719s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496891022s) [2] async=[2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 active pruub 126.046516418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.496793747s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 126.046516418s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 65 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=63/64 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65 pruub=15.495891571s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=57'485 lcod 56'484 unknown NOTIFY pruub 126.046806335s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:55 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:55 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:55 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'485 lcod 57'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:55 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 65 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[56,64)/1 crt=57'485 lcod 57'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 4 peering, 301 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb 01 14:52:55 compute-0 python3.9[98651]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:52:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb 01 14:52:55 compute-0 ceph-mon[75179]: osdmap e64: 3 total, 3 up, 3 in
Feb 01 14:52:55 compute-0 ceph-mon[75179]: osdmap e65: 3 total, 3 up, 3 in
Feb 01 14:52:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb 01 14:52:56 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 01 14:52:56 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 01 14:52:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb 01 14:52:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb 01 14:52:56 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995784760s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'487 lcod 57'486 active pruub 130.059646606s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995539665s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 130.059646606s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995609283s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 active pruub 130.059814453s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995502472s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 unknown NOTIFY pruub 130.059814453s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.995087624s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 active pruub 130.059875488s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=64/65 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994912148s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=57'485 lcod 57'484 unknown NOTIFY pruub 130.059875488s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994610786s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=38'483 active pruub 130.059753418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=64/65 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=14.994564056s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=38'483 unknown NOTIFY pruub 130.059753418s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=0/0 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.6( v 38'483 (0'0,38'483] local-lis/les=65/66 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.e( v 57'489 (0'0,57'489] local-lis/les=65/66 n=7 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:56 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 66 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=63/48 les/c/f=64/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:56 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb 01 14:52:56 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb 01 14:52:56 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb 01 14:52:56 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb 01 14:52:56 compute-0 ceph-mon[75179]: pgmap v135: 305 pgs: 4 peering, 301 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb 01 14:52:56 compute-0 ceph-mon[75179]: 3.19 scrub starts
Feb 01 14:52:56 compute-0 ceph-mon[75179]: 2.14 scrub starts
Feb 01 14:52:56 compute-0 ceph-mon[75179]: 2.14 scrub ok
Feb 01 14:52:56 compute-0 ceph-mon[75179]: osdmap e66: 3 total, 3 up, 3 in
Feb 01 14:52:57 compute-0 sudo[98867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iugchxtxcjlchzltfhdjdfnkgfiynhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957576.760922-27-111947110178526/AnsiballZ_command.py'
Feb 01 14:52:57 compute-0 sudo[98867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:52:57 compute-0 python3.9[98869]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:52:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb 01 14:52:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb 01 14:52:57 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb 01 14:52:57 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.17( v 57'485 (0'0,57'485] local-lis/les=66/67 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:57 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.f( v 57'485 (0'0,57'485] local-lis/les=66/67 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:57 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.7( v 57'487 (0'0,57'487] local-lis/les=66/67 n=7 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:57 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 67 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=64/56 les/c/f=65/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:52:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Feb 01 14:52:57 compute-0 ceph-mon[75179]: 3.19 scrub ok
Feb 01 14:52:57 compute-0 ceph-mon[75179]: 4.16 scrub starts
Feb 01 14:52:57 compute-0 ceph-mon[75179]: 4.16 scrub ok
Feb 01 14:52:57 compute-0 ceph-mon[75179]: 7.1d scrub starts
Feb 01 14:52:57 compute-0 ceph-mon[75179]: 7.1d scrub ok
Feb 01 14:52:57 compute-0 ceph-mon[75179]: osdmap e67: 3 total, 3 up, 3 in
Feb 01 14:52:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb 01 14:52:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb 01 14:52:58 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb 01 14:52:58 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb 01 14:52:58 compute-0 ceph-mon[75179]: pgmap v138: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Feb 01 14:52:58 compute-0 ceph-mon[75179]: 8.13 scrub starts
Feb 01 14:52:58 compute-0 ceph-mon[75179]: 8.13 scrub ok
Feb 01 14:52:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 413 B/s, 9 objects/s recovering
Feb 01 14:52:59 compute-0 ceph-mon[75179]: 4.17 scrub starts
Feb 01 14:52:59 compute-0 ceph-mon[75179]: 4.17 scrub ok
Feb 01 14:53:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb 01 14:53:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb 01 14:53:00 compute-0 sudo[98883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:53:00 compute-0 sudo[98883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:00 compute-0 sudo[98883]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:00 compute-0 sudo[98908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:00 compute-0 sudo[98908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:00 compute-0 sudo[98908]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:53:00 compute-0 sudo[98970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:53:00 compute-0 sudo[98970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:00 compute-0 sudo[98970]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:00 compute-0 sudo[98995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:53:00 compute-0 sudo[98995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:00 compute-0 ceph-mon[75179]: pgmap v139: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 413 B/s, 9 objects/s recovering
Feb 01 14:53:00 compute-0 ceph-mon[75179]: 10.18 scrub starts
Feb 01 14:53:00 compute-0 ceph-mon[75179]: 10.18 scrub ok
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:53:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.161779873 +0000 UTC m=+0.050160849 container create c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:53:01 compute-0 systemd[1]: Started libpod-conmon-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope.
Feb 01 14:53:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.140780168 +0000 UTC m=+0.029161234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.240412337 +0000 UTC m=+0.128793333 container init c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.247509451 +0000 UTC m=+0.135890457 container start c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.251171085 +0000 UTC m=+0.139552061 container attach c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:53:01 compute-0 vibrant_booth[99054]: 167 167
Feb 01 14:53:01 compute-0 systemd[1]: libpod-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope: Deactivated successfully.
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.254026001 +0000 UTC m=+0.142406977 container died c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a2575bf4a52dcbe3db78aaf9b1dd258ffa549b31e9af8fb02cfe859d503eaa-merged.mount: Deactivated successfully.
Feb 01 14:53:01 compute-0 podman[99035]: 2026-02-01 14:53:01.303616116 +0000 UTC m=+0.191997092 container remove c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_booth, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:01 compute-0 systemd[1]: libpod-conmon-c4af9bde0bb686a0afab6a479b6cea9e5dca16f262669ca70ccc8d01597d5110.scope: Deactivated successfully.
Feb 01 14:53:01 compute-0 podman[99079]: 2026-02-01 14:53:01.441337203 +0000 UTC m=+0.047729442 container create f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:53:01 compute-0 systemd[1]: Started libpod-conmon-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope.
Feb 01 14:53:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:01 compute-0 podman[99079]: 2026-02-01 14:53:01.426684155 +0000 UTC m=+0.033076414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:01 compute-0 podman[99079]: 2026-02-01 14:53:01.542199011 +0000 UTC m=+0.148591300 container init f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:53:01 compute-0 podman[99079]: 2026-02-01 14:53:01.558922237 +0000 UTC m=+0.165314476 container start f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:53:01 compute-0 podman[99079]: 2026-02-01 14:53:01.562220243 +0000 UTC m=+0.168612582 container attach f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:53:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 316 B/s, 7 objects/s recovering
Feb 01 14:53:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 01 14:53:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 01 14:53:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 01 14:53:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 01 14:53:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb 01 14:53:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 01 14:53:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 01 14:53:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 01 14:53:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 01 14:53:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb 01 14:53:01 compute-0 magical_sutherland[99096]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:53:01 compute-0 magical_sutherland[99096]: --> All data devices are unavailable
Feb 01 14:53:01 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.963678360s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=38'483 lcod 0'0 active pruub 132.062423706s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:01 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.963602066s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 132.062423706s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:01 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.966928482s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=57'486 lcod 57'486 active pruub 132.065948486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:01 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 68 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=14.966861725s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 132.065948486s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:01 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb 01 14:53:01 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:01 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:02 compute-0 podman[99079]: 2026-02-01 14:53:02.014464318 +0000 UTC m=+0.620856557 container died f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 01 14:53:02 compute-0 systemd[1]: libpod-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope: Deactivated successfully.
Feb 01 14:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-735bce9c2cf774df9ef027a6909ec50fcb3170edaa4484679fddba6428684f3c-merged.mount: Deactivated successfully.
Feb 01 14:53:02 compute-0 podman[99079]: 2026-02-01 14:53:02.052380293 +0000 UTC m=+0.658772532 container remove f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:53:02 compute-0 systemd[1]: libpod-conmon-f20fec54c46a93dc1c3b4065e014e98ee304d709a34a13eea74a1a603f1b4b87.scope: Deactivated successfully.
Feb 01 14:53:02 compute-0 sudo[98995]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:02 compute-0 sudo[99127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:53:02 compute-0 sudo[99127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:02 compute-0 sudo[99127]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:02 compute-0 sudo[99152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:53:02 compute-0 sudo[99152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:02 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 68 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=10.513087273s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 active pruub 131.539382935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:02 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 68 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68 pruub=10.513002396s) [2] r=-1 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 131.539382935s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.532138553 +0000 UTC m=+0.067314094 container create 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:02 compute-0 systemd[1]: Started libpod-conmon-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope.
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.499880569 +0000 UTC m=+0.035056200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:02 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.62257158 +0000 UTC m=+0.157747141 container init 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.629646553 +0000 UTC m=+0.164822124 container start 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.632576331 +0000 UTC m=+0.167751902 container attach 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:02 compute-0 stupefied_saha[99206]: 167 167
Feb 01 14:53:02 compute-0 systemd[1]: libpod-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope: Deactivated successfully.
Feb 01 14:53:02 compute-0 conmon[99206]: conmon 74027cc61e818bc9f17b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope/container/memory.events
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.636765117 +0000 UTC m=+0.171940678 container died 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-751ea7f5cb9a90b1e009f3f8d973e0af11a2f34e28f206c7e151f025f5aee037-merged.mount: Deactivated successfully.
Feb 01 14:53:02 compute-0 podman[99189]: 2026-02-01 14:53:02.683172208 +0000 UTC m=+0.218347749 container remove 74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:53:02 compute-0 systemd[1]: libpod-conmon-74027cc61e818bc9f17b0e289564259b2643069d8fcec95a86a3a5c8a159a86a.scope: Deactivated successfully.
Feb 01 14:53:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb 01 14:53:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb 01 14:53:02 compute-0 podman[99230]: 2026-02-01 14:53:02.824692374 +0000 UTC m=+0.040371633 container create 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 14:53:02 compute-0 systemd[1]: Started libpod-conmon-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope.
Feb 01 14:53:02 compute-0 podman[99230]: 2026-02-01 14:53:02.8076324 +0000 UTC m=+0.023311639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:02 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:02 compute-0 podman[99230]: 2026-02-01 14:53:02.954045268 +0000 UTC m=+0.169724537 container init 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:53:02 compute-0 podman[99230]: 2026-02-01 14:53:02.962410762 +0000 UTC m=+0.178090011 container start 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:02 compute-0 podman[99230]: 2026-02-01 14:53:02.966266841 +0000 UTC m=+0.181946130 container attach 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:53:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb 01 14:53:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb 01 14:53:02 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[48,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:02 compute-0 ceph-mon[75179]: pgmap v140: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 316 B/s, 7 objects/s recovering
Feb 01 14:53:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 01 14:53:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 01 14:53:02 compute-0 ceph-mon[75179]: osdmap e68: 3 total, 3 up, 3 in
Feb 01 14:53:02 compute-0 ceph-mon[75179]: 7.7 scrub starts
Feb 01 14:53:02 compute-0 ceph-mon[75179]: 7.7 scrub ok
Feb 01 14:53:02 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 69 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=68/69 n=1 ec=44/21 lis/c=44/44 les/c/f=45/45/0 sis=68) [2] r=0 lpr=68 pi=[44,68)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:02 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:02 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:02 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:02 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 69 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:03 compute-0 sharp_beaver[99247]: {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     "0": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "devices": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "/dev/loop3"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             ],
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_name": "ceph_lv0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_size": "21470642176",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "name": "ceph_lv0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "tags": {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_name": "ceph",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.crush_device_class": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.encrypted": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.objectstore": "bluestore",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_id": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.vdo": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.with_tpm": "0"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             },
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "vg_name": "ceph_vg0"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         }
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     ],
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     "1": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "devices": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "/dev/loop4"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             ],
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_name": "ceph_lv1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_size": "21470642176",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "name": "ceph_lv1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "tags": {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_name": "ceph",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.crush_device_class": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.encrypted": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.objectstore": "bluestore",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_id": "1",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.vdo": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.with_tpm": "0"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             },
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "vg_name": "ceph_vg1"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         }
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     ],
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     "2": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "devices": [
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "/dev/loop5"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             ],
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_name": "ceph_lv2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_size": "21470642176",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "name": "ceph_lv2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "tags": {
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.cluster_name": "ceph",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.crush_device_class": "",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.encrypted": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.objectstore": "bluestore",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osd_id": "2",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.vdo": "0",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:                 "ceph.with_tpm": "0"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             },
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "type": "block",
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:             "vg_name": "ceph_vg2"
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:         }
Feb 01 14:53:03 compute-0 sharp_beaver[99247]:     ]
Feb 01 14:53:03 compute-0 sharp_beaver[99247]: }
Feb 01 14:53:03 compute-0 systemd[1]: libpod-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope: Deactivated successfully.
Feb 01 14:53:03 compute-0 podman[99230]: 2026-02-01 14:53:03.270434389 +0000 UTC m=+0.486113648 container died 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 14:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e258c02ff5051181ba60341e405f9816f8dd45feb283fab6812cfe6f273b49b-merged.mount: Deactivated successfully.
Feb 01 14:53:03 compute-0 podman[99230]: 2026-02-01 14:53:03.31468577 +0000 UTC m=+0.530364999 container remove 68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_beaver, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:03 compute-0 systemd[1]: libpod-conmon-68d80571a00a2a910a4e45609edb0ab97d2aa9d28a38330c4144c9d5faf2c6e9.scope: Deactivated successfully.
Feb 01 14:53:03 compute-0 sudo[99152]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:03 compute-0 sudo[99274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:53:03 compute-0 sudo[99274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:03 compute-0 sudo[99274]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:03 compute-0 sudo[99299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:53:03 compute-0 sudo[99299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.723810901 +0000 UTC m=+0.051644483 container create 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:53:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 01 14:53:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 01 14:53:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 01 14:53:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 01 14:53:03 compute-0 systemd[1]: Started libpod-conmon-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope.
Feb 01 14:53:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb 01 14:53:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb 01 14:53:03 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.698141128 +0000 UTC m=+0.025974780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.797836799 +0000 UTC m=+0.125670411 container init 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.803166302 +0000 UTC m=+0.130999874 container start 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:03 compute-0 xenodochial_neumann[99353]: 167 167
Feb 01 14:53:03 compute-0 systemd[1]: libpod-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope: Deactivated successfully.
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.808805772 +0000 UTC m=+0.136639434 container attach 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.809426606 +0000 UTC m=+0.137260198 container died 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:53:03 compute-0 sudo[98867]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-69967fd5be47176ac06a186c2c31a140cec5aa3afde7f01166261ce773d92215-merged.mount: Deactivated successfully.
Feb 01 14:53:03 compute-0 podman[99337]: 2026-02-01 14:53:03.845716643 +0000 UTC m=+0.173550215 container remove 4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:53:03 compute-0 systemd[1]: libpod-conmon-4554b5ad803e02ca43db9d2728c546022bd9af9f0b78df4835c4d4e40e08046a.scope: Deactivated successfully.
Feb 01 14:53:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb 01 14:53:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 01 14:53:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 01 14:53:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:04.031743026 +0000 UTC m=+0.100132522 container create 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:53:04 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb 01 14:53:04 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=13.064660072s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 active pruub 132.241012573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:04 compute-0 ceph-mon[75179]: osdmap e69: 3 total, 3 up, 3 in
Feb 01 14:53:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 01 14:53:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 01 14:53:04 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=52/53 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=13.064594269s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 132.241012573s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:03.977460313 +0000 UTC m=+0.045849859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:53:04 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 70 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:04 compute-0 systemd[1]: Started libpod-conmon-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope.
Feb 01 14:53:04 compute-0 sshd-session[98501]: Connection closed by 192.168.122.30 port 42726
Feb 01 14:53:04 compute-0 sshd-session[98498]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:53:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:53:04 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Feb 01 14:53:04 compute-0 systemd[1]: session-33.scope: Consumed 7.757s CPU time.
Feb 01 14:53:04 compute-0 systemd-logind[786]: Session 33 logged out. Waiting for processes to exit.
Feb 01 14:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:04 compute-0 systemd-logind[786]: Removed session 33.
Feb 01 14:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:04.148629073 +0000 UTC m=+0.217018619 container init 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:04.158267385 +0000 UTC m=+0.226656851 container start 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:04.161361227 +0000 UTC m=+0.229750783 container attach 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 14:53:04 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:04 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 70 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[48,69)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:04 compute-0 lvm[99495]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:53:04 compute-0 lvm[99495]: VG ceph_vg0 finished
Feb 01 14:53:04 compute-0 lvm[99498]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:53:04 compute-0 lvm[99498]: VG ceph_vg1 finished
Feb 01 14:53:04 compute-0 lvm[99500]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:53:04 compute-0 lvm[99500]: VG ceph_vg2 finished
Feb 01 14:53:04 compute-0 exciting_diffie[99418]: {}
Feb 01 14:53:04 compute-0 systemd[1]: libpod-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Deactivated successfully.
Feb 01 14:53:04 compute-0 systemd[1]: libpod-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Consumed 1.169s CPU time.
Feb 01 14:53:04 compute-0 podman[99401]: 2026-02-01 14:53:04.986408223 +0000 UTC m=+1.054797719 container died 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3b725ae19e7de63b1731e14ae2b73f1016fab5b080b589be953b76e7a5ec66-merged.mount: Deactivated successfully.
Feb 01 14:53:05 compute-0 podman[99401]: 2026-02-01 14:53:05.034990504 +0000 UTC m=+1.103380000 container remove 02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 14:53:05 compute-0 systemd[1]: libpod-conmon-02440a0ac0a21ad867f8ea188072c8ff6d6c2568006a17fd1faa290acbff43f2.scope: Deactivated successfully.
Feb 01 14:53:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb 01 14:53:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb 01 14:53:05 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb 01 14:53:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:05 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.237437248s) [2] async=[2] r=-1 lpr=71 pi=[48,71)/1 crt=57'487 lcod 57'486 active pruub 135.434600830s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:05 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=69/70 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.237270355s) [2] r=-1 lpr=71 pi=[48,71)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 135.434600830s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:05 compute-0 ceph-mon[75179]: pgmap v143: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:05 compute-0 ceph-mon[75179]: 11.17 scrub starts
Feb 01 14:53:05 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.236918449s) [2] async=[2] r=-1 lpr=71 pi=[48,71)/1 crt=38'483 lcod 0'0 active pruub 135.434539795s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:05 compute-0 ceph-mon[75179]: 11.17 scrub ok
Feb 01 14:53:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 01 14:53:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 01 14:53:05 compute-0 ceph-mon[75179]: osdmap e70: 3 total, 3 up, 3 in
Feb 01 14:53:05 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 71 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=69/70 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71 pruub=15.236736298s) [2] r=-1 lpr=71 pi=[48,71)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 135.434539795s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:05 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 71 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=70/71 n=1 ec=44/21 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:05 compute-0 sudo[99299]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:53:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:53:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:05 compute-0 sudo[99514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:53:05 compute-0 sudo[99514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:53:05 compute-0 sudo[99514]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:05 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb 01 14:53:05 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb 01 14:53:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Feb 01 14:53:05 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb 01 14:53:05 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb 01 14:53:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb 01 14:53:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb 01 14:53:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb 01 14:53:06 compute-0 ceph-mon[75179]: osdmap e71: 3 total, 3 up, 3 in
Feb 01 14:53:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:53:06 compute-0 ceph-mon[75179]: 2.10 scrub starts
Feb 01 14:53:06 compute-0 ceph-mon[75179]: 2.10 scrub ok
Feb 01 14:53:06 compute-0 ceph-mon[75179]: 8.8 scrub starts
Feb 01 14:53:06 compute-0 ceph-mon[75179]: 8.8 scrub ok
Feb 01 14:53:06 compute-0 ceph-mon[75179]: osdmap e72: 3 total, 3 up, 3 in
Feb 01 14:53:06 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 72 pg[9.8( v 38'483 (0'0,38'483] local-lis/les=71/72 n=7 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:06 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 72 pg[9.18( v 57'487 (0'0,57'487] local-lis/les=71/72 n=6 ec=48/32 lis/c=69/48 les/c/f=70/49/0 sis=71) [2] r=0 lpr=71 pi=[48,71)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb 01 14:53:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb 01 14:53:07 compute-0 ceph-mon[75179]: pgmap v146: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 3 objects/s recovering
Feb 01 14:53:07 compute-0 ceph-mon[75179]: 8.a scrub starts
Feb 01 14:53:07 compute-0 ceph-mon[75179]: 8.a scrub ok
Feb 01 14:53:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb 01 14:53:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb 01 14:53:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 2 objects/s recovering
Feb 01 14:53:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb 01 14:53:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb 01 14:53:08 compute-0 ceph-mon[75179]: 5.17 scrub starts
Feb 01 14:53:08 compute-0 ceph-mon[75179]: 5.17 scrub ok
Feb 01 14:53:08 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb 01 14:53:08 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb 01 14:53:08 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb 01 14:53:08 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb 01 14:53:09 compute-0 ceph-mon[75179]: pgmap v148: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 2 objects/s recovering
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 7.1b scrub starts
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 7.1b scrub ok
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 5.8 scrub starts
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 5.8 scrub ok
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 11.0 scrub starts
Feb 01 14:53:09 compute-0 ceph-mon[75179]: 11.0 scrub ok
Feb 01 14:53:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb 01 14:53:09 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb 01 14:53:09 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb 01 14:53:10 compute-0 ceph-mon[75179]: 8.3 scrub starts
Feb 01 14:53:10 compute-0 ceph-mon[75179]: 8.3 scrub ok
Feb 01 14:53:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb 01 14:53:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb 01 14:53:11 compute-0 ceph-mon[75179]: pgmap v149: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 2 objects/s recovering
Feb 01 14:53:11 compute-0 ceph-mon[75179]: 8.1 scrub starts
Feb 01 14:53:11 compute-0 ceph-mon[75179]: 8.1 scrub ok
Feb 01 14:53:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 1 objects/s recovering
Feb 01 14:53:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 01 14:53:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 01 14:53:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 01 14:53:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 01 14:53:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb 01 14:53:12 compute-0 ceph-mon[75179]: pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 1 objects/s recovering
Feb 01 14:53:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 01 14:53:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 01 14:53:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 01 14:53:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 01 14:53:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb 01 14:53:12 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb 01 14:53:12 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 73 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=14.500942230s) [0] r=-1 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 active pruub 142.235549927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:12 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 73 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=14.500884056s) [0] r=-1 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 unknown NOTIFY pruub 142.235549927s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:12 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 73 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73) [0] r=0 lpr=73 pi=[54,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb 01 14:53:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 01 14:53:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 01 14:53:13 compute-0 ceph-mon[75179]: osdmap e73: 3 total, 3 up, 3 in
Feb 01 14:53:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb 01 14:53:13 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb 01 14:53:13 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 74 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=73/74 n=1 ec=44/21 lis/c=54/54 les/c/f=55/55/0 sis=73) [0] r=0 lpr=73 pi=[54,73)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:13 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb 01 14:53:13 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb 01 14:53:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 01 14:53:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 01 14:53:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 01 14:53:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 01 14:53:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb 01 14:53:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 01 14:53:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 01 14:53:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb 01 14:53:14 compute-0 ceph-mon[75179]: osdmap e74: 3 total, 3 up, 3 in
Feb 01 14:53:14 compute-0 ceph-mon[75179]: 2.e scrub starts
Feb 01 14:53:14 compute-0 ceph-mon[75179]: 2.e scrub ok
Feb 01 14:53:14 compute-0 ceph-mon[75179]: pgmap v153: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:14 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 01 14:53:14 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 01 14:53:14 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb 01 14:53:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 01 14:53:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 01 14:53:15 compute-0 ceph-mon[75179]: osdmap e75: 3 total, 3 up, 3 in
Feb 01 14:53:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb 01 14:53:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb 01 14:53:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 01 14:53:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 01 14:53:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 01 14:53:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 01 14:53:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb 01 14:53:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 01 14:53:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 01 14:53:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb 01 14:53:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb 01 14:53:16 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 75 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=12.965231895s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=32'39 active pruub 147.773178101s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:16 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 76 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75 pruub=12.965162277s) [1] r=-1 lpr=75 pi=[56,75)/1 crt=32'39 unknown NOTIFY pruub 147.773178101s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:16 compute-0 ceph-mon[75179]: 10.5 scrub starts
Feb 01 14:53:16 compute-0 ceph-mon[75179]: 10.5 scrub ok
Feb 01 14:53:16 compute-0 ceph-mon[75179]: pgmap v155: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 01 14:53:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 01 14:53:16 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [1] r=0 lpr=76 pi=[56,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:16 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.747318268s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=38'483 lcod 0'0 active pruub 140.062316895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:16 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.747279167s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 140.062316895s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:16 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.750363350s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=57'486 lcod 57'486 active pruub 140.066101074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:16 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 76 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76 pruub=8.750317574s) [2] r=-1 lpr=76 pi=[48,76)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 140.066101074s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:16 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76) [2] r=0 lpr=76 pi=[48,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:16 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=76) [2] r=0 lpr=76 pi=[48,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb 01 14:53:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb 01 14:53:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb 01 14:53:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb 01 14:53:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb 01 14:53:17 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:17 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=48/49 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:17 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:17 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=48/49 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=0 lpr=77 pi=[48,77)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] r=-1 lpr=77 pi=[48,77)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 01 14:53:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 01 14:53:17 compute-0 ceph-mon[75179]: osdmap e76: 3 total, 3 up, 3 in
Feb 01 14:53:17 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 77 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=75/77 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=75) [1] r=0 lpr=76 pi=[56,75)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:53:17
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'backups']
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:53:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 01 14:53:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 01 14:53:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 01 14:53:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 01 14:53:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb 01 14:53:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 01 14:53:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 01 14:53:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb 01 14:53:18 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb 01 14:53:18 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[48,77)/1 crt=38'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:18 compute-0 ceph-mon[75179]: 8.14 scrub starts
Feb 01 14:53:18 compute-0 ceph-mon[75179]: 8.14 scrub ok
Feb 01 14:53:18 compute-0 ceph-mon[75179]: osdmap e77: 3 total, 3 up, 3 in
Feb 01 14:53:18 compute-0 ceph-mon[75179]: pgmap v158: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 01 14:53:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 01 14:53:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 01 14:53:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 01 14:53:18 compute-0 ceph-mon[75179]: osdmap e78: 3 total, 3 up, 3 in
Feb 01 14:53:18 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=77) [2]/[1] async=[2] r=0 lpr=77 pi=[48,77)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:53:18 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 78 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.373157501s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=32'39 active pruub 149.470657349s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:18 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 78 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=60/61 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.372897148s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=32'39 unknown NOTIFY pruub 149.470657349s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:53:18 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 78 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:53:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:53:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb 01 14:53:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb 01 14:53:18 compute-0 sshd-session[99539]: Accepted publickey for zuul from 192.168.122.30 port 37862 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:53:19 compute-0 systemd-logind[786]: New session 34 of user zuul.
Feb 01 14:53:19 compute-0 systemd[1]: Started Session 34 of User zuul.
Feb 01 14:53:19 compute-0 sshd-session[99539]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:53:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb 01 14:53:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb 01 14:53:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb 01 14:53:19 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.960889816s) [2] async=[2] r=-1 lpr=79 pi=[48,79)/1 crt=38'483 lcod 0'0 active pruub 149.339111328s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:19 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=77/78 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.960788727s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=38'483 lcod 0'0 unknown NOTIFY pruub 149.339111328s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:19 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.966958046s) [2] async=[2] r=-1 lpr=79 pi=[48,79)/1 crt=57'487 lcod 57'486 active pruub 149.346572876s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:19 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=77/78 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79 pruub=14.966865540s) [2] r=-1 lpr=79 pi=[48,79)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 149.346572876s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:19 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 79 pg[6.d( v 32'39 lc 31'13 (0'0,32'39] local-lis/les=78/79 n=1 ec=44/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:19 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:19 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:19 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:19 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 79 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=0/0 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:19 compute-0 python3.9[99692]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 01 14:53:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 01 14:53:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 01 14:53:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 01 14:53:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 01 14:53:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb 01 14:53:20 compute-0 ceph-mon[75179]: 10.16 scrub starts
Feb 01 14:53:20 compute-0 ceph-mon[75179]: 10.16 scrub ok
Feb 01 14:53:20 compute-0 ceph-mon[75179]: osdmap e79: 3 total, 3 up, 3 in
Feb 01 14:53:20 compute-0 ceph-mon[75179]: pgmap v161: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 01 14:53:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 01 14:53:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 01 14:53:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 01 14:53:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb 01 14:53:20 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb 01 14:53:20 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 80 pg[9.c( v 38'483 (0'0,38'483] local-lis/les=79/80 n=7 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:20 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 80 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=77/48 les/c/f=78/49/0 sis=79) [2] r=0 lpr=79 pi=[48,79)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:20 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb 01 14:53:20 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb 01 14:53:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb 01 14:53:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb 01 14:53:20 compute-0 python3.9[99866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:53:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 01 14:53:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 01 14:53:21 compute-0 ceph-mon[75179]: osdmap e80: 3 total, 3 up, 3 in
Feb 01 14:53:21 compute-0 ceph-mon[75179]: 5.a scrub starts
Feb 01 14:53:21 compute-0 ceph-mon[75179]: 5.a scrub ok
Feb 01 14:53:21 compute-0 ceph-mon[75179]: 8.0 scrub starts
Feb 01 14:53:21 compute-0 ceph-mon[75179]: 8.0 scrub ok
Feb 01 14:53:21 compute-0 sudo[100020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbsbmvbgcvgcwgstigzktwiklqanagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957601.1855085-40-134121616738079/AnsiballZ_command.py'
Feb 01 14:53:21 compute-0 sudo[100020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:21 compute-0 python3.9[100022]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:53:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 3 objects/s recovering
Feb 01 14:53:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 01 14:53:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 01 14:53:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 01 14:53:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 01 14:53:21 compute-0 sudo[100020]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb 01 14:53:22 compute-0 ceph-mon[75179]: pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 3 objects/s recovering
Feb 01 14:53:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 01 14:53:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 01 14:53:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 01 14:53:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 01 14:53:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb 01 14:53:22 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb 01 14:53:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81 pruub=14.861714363s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=32'39 active pruub 155.773696899s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:22 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 81 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=56/57 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81 pruub=14.861543655s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=32'39 unknown NOTIFY pruub 155.773696899s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:22 compute-0 sudo[100173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnouoplibacraplhrkcvancansthragd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957602.0919242-52-10187443411309/AnsiballZ_stat.py'
Feb 01 14:53:22 compute-0 sudo[100173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:22 compute-0 python3.9[100175]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:53:22 compute-0 sudo[100173]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb 01 14:53:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 01 14:53:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 01 14:53:23 compute-0 ceph-mon[75179]: osdmap e81: 3 total, 3 up, 3 in
Feb 01 14:53:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb 01 14:53:23 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb 01 14:53:23 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 82 pg[6.f( v 32'39 lc 31'1 (0'0,32'39] local-lis/les=81/82 n=1 ec=44/21 lis/c=56/56 les/c/f=57/57/0 sis=81) [2] r=0 lpr=81 pi=[56,81)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:23 compute-0 sudo[100327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyrzondoertnhdgnjpfbgmgznddmojlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957602.9661963-63-134864385380009/AnsiballZ_file.py'
Feb 01 14:53:23 compute-0 sudo[100327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:23 compute-0 python3.9[100329]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:53:23 compute-0 sudo[100327]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb 01 14:53:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb 01 14:53:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 3 objects/s recovering
Feb 01 14:53:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb 01 14:53:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 01 14:53:23 compute-0 sudo[100479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicunbsvirndkkcwmbygdbwdfghpsxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957603.750583-72-96313692259272/AnsiballZ_file.py'
Feb 01 14:53:23 compute-0 sudo[100479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:24 compute-0 python3.9[100481]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:53:24 compute-0 sudo[100479]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb 01 14:53:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 01 14:53:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb 01 14:53:24 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb 01 14:53:24 compute-0 ceph-mon[75179]: osdmap e82: 3 total, 3 up, 3 in
Feb 01 14:53:24 compute-0 ceph-mon[75179]: 3.b scrub starts
Feb 01 14:53:24 compute-0 ceph-mon[75179]: 3.b scrub ok
Feb 01 14:53:24 compute-0 ceph-mon[75179]: pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 3 objects/s recovering
Feb 01 14:53:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 01 14:53:24 compute-0 python3.9[100631]: ansible-ansible.builtin.service_facts Invoked
Feb 01 14:53:24 compute-0 network[100648]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 14:53:24 compute-0 network[100649]: 'network-scripts' will be removed from distribution in near future.
Feb 01 14:53:24 compute-0 network[100650]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 14:53:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 01 14:53:25 compute-0 ceph-mon[75179]: osdmap e83: 3 total, 3 up, 3 in
Feb 01 14:53:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb 01 14:53:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb 01 14:53:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 3 objects/s recovering
Feb 01 14:53:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb 01 14:53:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 01 14:53:25 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 01 14:53:25 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 01 14:53:26 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb 01 14:53:26 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb 01 14:53:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb 01 14:53:26 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 01 14:53:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb 01 14:53:26 compute-0 ceph-mon[75179]: 2.c scrub starts
Feb 01 14:53:26 compute-0 ceph-mon[75179]: 2.c scrub ok
Feb 01 14:53:26 compute-0 ceph-mon[75179]: pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 3 objects/s recovering
Feb 01 14:53:26 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 01 14:53:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb 01 14:53:27 compute-0 ceph-mon[75179]: 2.11 scrub starts
Feb 01 14:53:27 compute-0 ceph-mon[75179]: 2.11 scrub ok
Feb 01 14:53:27 compute-0 ceph-mon[75179]: 5.b scrub starts
Feb 01 14:53:27 compute-0 ceph-mon[75179]: 5.b scrub ok
Feb 01 14:53:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 01 14:53:27 compute-0 ceph-mon[75179]: osdmap e84: 3 total, 3 up, 3 in
Feb 01 14:53:27 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb 01 14:53:27 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Feb 01 14:53:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb 01 14:53:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 01 14:53:27 compute-0 python3.9[100910]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7614284514635656e-06 of space, bias 4.0, pg target 0.0021137141417562786 quantized to 16 (current 16)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:53:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:53:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb 01 14:53:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 01 14:53:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb 01 14:53:28 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb 01 14:53:28 compute-0 ceph-mon[75179]: 11.c scrub starts
Feb 01 14:53:28 compute-0 ceph-mon[75179]: 11.c scrub ok
Feb 01 14:53:28 compute-0 ceph-mon[75179]: pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Feb 01 14:53:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 01 14:53:28 compute-0 python3.9[101060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:53:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb 01 14:53:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb 01 14:53:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 01 14:53:29 compute-0 ceph-mon[75179]: osdmap e85: 3 total, 3 up, 3 in
Feb 01 14:53:29 compute-0 ceph-mon[75179]: 3.4 scrub starts
Feb 01 14:53:29 compute-0 ceph-mon[75179]: 3.4 scrub ok
Feb 01 14:53:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Feb 01 14:53:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb 01 14:53:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 01 14:53:29 compute-0 python3.9[101214]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:53:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb 01 14:53:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 01 14:53:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb 01 14:53:30 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb 01 14:53:30 compute-0 sudo[101370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifkfjguquhnwwutigtdtvzswstcpegm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957610.1199696-120-33494985505365/AnsiballZ_setup.py'
Feb 01 14:53:30 compute-0 ceph-mon[75179]: pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Feb 01 14:53:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 01 14:53:30 compute-0 sudo[101370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:30 compute-0 python3.9[101372]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:53:31 compute-0 sudo[101370]: pam_unix(sudo:session): session closed for user root
Feb 01 14:53:31 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 01 14:53:31 compute-0 ceph-mon[75179]: osdmap e86: 3 total, 3 up, 3 in
Feb 01 14:53:31 compute-0 sudo[101454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchywpbcxxafgabsfdnxjgfoudvfdjee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957610.1199696-120-33494985505365/AnsiballZ_dnf.py'
Feb 01 14:53:31 compute-0 sudo[101454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:53:31 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 86 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.644948006s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=55'484 lcod 55'484 active pruub 163.773727417s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:31 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 86 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86 pruub=13.644883156s) [2] r=-1 lpr=86 pi=[56,86)/1 crt=55'484 lcod 55'484 unknown NOTIFY pruub 163.773727417s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:31 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 86 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=86) [2] r=0 lpr=86 pi=[56,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:31 compute-0 python3.9[101456]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:53:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb 01 14:53:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 01 14:53:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb 01 14:53:32 compute-0 ceph-mon[75179]: pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 01 14:53:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 01 14:53:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb 01 14:53:32 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb 01 14:53:32 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 87 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=0 lpr=87 pi=[56,87)/1 crt=55'484 lcod 55'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:32 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 87 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=56/57 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=0 lpr=87 pi=[56,87)/1 crt=55'484 lcod 55'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:32 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[56,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:32 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[56,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb 01 14:53:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb 01 14:53:33 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb 01 14:53:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 01 14:53:33 compute-0 ceph-mon[75179]: osdmap e87: 3 total, 3 up, 3 in
Feb 01 14:53:33 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 88 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=56/56 les/c/f=57/57/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[56,87)/1 crt=57'485 lcod 55'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:33 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 01 14:53:33 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 01 14:53:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb 01 14:53:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 01 14:53:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb 01 14:53:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 01 14:53:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb 01 14:53:34 compute-0 ceph-mon[75179]: osdmap e88: 3 total, 3 up, 3 in
Feb 01 14:53:34 compute-0 ceph-mon[75179]: 7.0 scrub starts
Feb 01 14:53:34 compute-0 ceph-mon[75179]: 7.0 scrub ok
Feb 01 14:53:34 compute-0 ceph-mon[75179]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 01 14:53:34 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb 01 14:53:34 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89 pruub=14.976616859s) [2] async=[2] r=-1 lpr=89 pi=[56,89)/1 crt=57'485 lcod 55'484 active pruub 168.043777466s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:34 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=87/88 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89 pruub=14.976508141s) [2] r=-1 lpr=89 pi=[56,89)/1 crt=57'485 lcod 55'484 unknown NOTIFY pruub 168.043777466s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:34 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89 pruub=9.675940514s) [1] r=-1 lpr=89 pi=[55,89)/1 crt=38'483 active pruub 162.744354248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:34 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 89 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89 pruub=9.675878525s) [1] r=-1 lpr=89 pi=[55,89)/1 crt=38'483 unknown NOTIFY pruub 162.744354248s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:34 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 89 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=89) [1] r=0 lpr=89 pi=[55,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:34 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:34 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 89 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb 01 14:53:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb 01 14:53:35 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb 01 14:53:35 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[55,90)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:35 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 90 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:35 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 90 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:35 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[55,90)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 01 14:53:35 compute-0 ceph-mon[75179]: osdmap e89: 3 total, 3 up, 3 in
Feb 01 14:53:35 compute-0 ceph-mon[75179]: osdmap e90: 3 total, 3 up, 3 in
Feb 01 14:53:35 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 90 pg[9.13( v 57'485 (0'0,57'485] local-lis/les=89/90 n=6 ec=48/32 lis/c=87/56 les/c/f=88/57/0 sis=89) [2] r=0 lpr=89 pi=[56,89)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb 01 14:53:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb 01 14:53:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb 01 14:53:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb 01 14:53:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 01 14:53:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb 01 14:53:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb 01 14:53:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb 01 14:53:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 01 14:53:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb 01 14:53:36 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb 01 14:53:36 compute-0 ceph-mon[75179]: 3.0 scrub starts
Feb 01 14:53:36 compute-0 ceph-mon[75179]: 3.0 scrub ok
Feb 01 14:53:36 compute-0 ceph-mon[75179]: pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb 01 14:53:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 01 14:53:36 compute-0 ceph-mon[75179]: 5.14 scrub starts
Feb 01 14:53:36 compute-0 ceph-mon[75179]: 5.14 scrub ok
Feb 01 14:53:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 01 14:53:36 compute-0 ceph-mon[75179]: osdmap e91: 3 total, 3 up, 3 in
Feb 01 14:53:36 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 91 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[55,90)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:36 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb 01 14:53:36 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb 01 14:53:37 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb 01 14:53:37 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb 01 14:53:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb 01 14:53:37 compute-0 ceph-mon[75179]: 11.a scrub starts
Feb 01 14:53:37 compute-0 ceph-mon[75179]: 11.a scrub ok
Feb 01 14:53:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb 01 14:53:37 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb 01 14:53:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 91 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91 pruub=14.964743614s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=38'483 active pruub 163.869369507s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:37 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 92 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91 pruub=14.964620590s) [0] r=-1 lpr=91 pi=[65,91)/1 crt=38'483 unknown NOTIFY pruub 163.869369507s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=91) [0] r=0 lpr=92 pi=[65,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92 pruub=15.021860123s) [1] async=[1] r=-1 lpr=92 pi=[55,92)/1 crt=38'483 active pruub 171.127532959s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:37 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=90/91 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92 pruub=15.021791458s) [1] r=-1 lpr=92 pi=[55,92)/1 crt=38'483 unknown NOTIFY pruub 171.127532959s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:37 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 92 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb 01 14:53:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 01 14:53:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 01 14:53:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb 01 14:53:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb 01 14:53:38 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb 01 14:53:38 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 93 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:38 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 93 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[65,93)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:38 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[65,93)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:38 compute-0 ceph-mon[75179]: 10.3 scrub starts
Feb 01 14:53:38 compute-0 ceph-mon[75179]: 10.3 scrub ok
Feb 01 14:53:38 compute-0 ceph-mon[75179]: osdmap e92: 3 total, 3 up, 3 in
Feb 01 14:53:38 compute-0 ceph-mon[75179]: pgmap v183: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb 01 14:53:38 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 93 pg[9.15( v 38'483 (0'0,38'483] local-lis/les=92/93 n=6 ec=48/32 lis/c=90/55 les/c/f=91/56/0 sis=92) [1] r=0 lpr=92 pi=[55,92)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb 01 14:53:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb 01 14:53:39 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb 01 14:53:39 compute-0 ceph-mon[75179]: 2.0 scrub starts
Feb 01 14:53:39 compute-0 ceph-mon[75179]: 2.0 scrub ok
Feb 01 14:53:39 compute-0 ceph-mon[75179]: osdmap e93: 3 total, 3 up, 3 in
Feb 01 14:53:39 compute-0 ceph-mon[75179]: osdmap e94: 3 total, 3 up, 3 in
Feb 01 14:53:39 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 94 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[65,93)/2 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb 01 14:53:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb 01 14:53:40 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb 01 14:53:40 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb 01 14:53:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb 01 14:53:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb 01 14:53:40 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb 01 14:53:40 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95 pruub=15.071164131s) [0] async=[0] r=-1 lpr=95 pi=[65,95)/2 crt=38'483 active pruub 166.934234619s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:40 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:40 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 95 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=93/94 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95 pruub=15.071048737s) [0] r=-1 lpr=95 pi=[65,95)/2 crt=38'483 unknown NOTIFY pruub 166.934234619s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:40 compute-0 ceph-mon[75179]: pgmap v186: 305 pgs: 1 remapped+peering, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:40 compute-0 ceph-mon[75179]: 3.2 scrub starts
Feb 01 14:53:40 compute-0 ceph-mon[75179]: 3.2 scrub ok
Feb 01 14:53:40 compute-0 ceph-mon[75179]: 2.13 scrub starts
Feb 01 14:53:40 compute-0 ceph-mon[75179]: 2.13 scrub ok
Feb 01 14:53:40 compute-0 ceph-mon[75179]: osdmap e95: 3 total, 3 up, 3 in
Feb 01 14:53:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb 01 14:53:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb 01 14:53:41 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb 01 14:53:41 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 96 pg[9.16( v 38'483 (0'0,38'483] local-lis/les=95/96 n=6 ec=48/32 lis/c=93/65 les/c/f=94/66/0 sis=95) [0] r=0 lpr=95 pi=[65,95)/2 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Feb 01 14:53:42 compute-0 ceph-mon[75179]: osdmap e96: 3 total, 3 up, 3 in
Feb 01 14:53:42 compute-0 ceph-mon[75179]: pgmap v189: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Feb 01 14:53:42 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 01 14:53:42 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 01 14:53:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb 01 14:53:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb 01 14:53:43 compute-0 ceph-mon[75179]: 7.d scrub starts
Feb 01 14:53:43 compute-0 ceph-mon[75179]: 7.d scrub ok
Feb 01 14:53:43 compute-0 ceph-mon[75179]: 5.15 scrub starts
Feb 01 14:53:43 compute-0 ceph-mon[75179]: 5.15 scrub ok
Feb 01 14:53:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 1 objects/s recovering
Feb 01 14:53:44 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb 01 14:53:44 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb 01 14:53:44 compute-0 ceph-mon[75179]: pgmap v190: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 1 objects/s recovering
Feb 01 14:53:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb 01 14:53:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb 01 14:53:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:45 compute-0 ceph-mon[75179]: 5.0 scrub starts
Feb 01 14:53:45 compute-0 ceph-mon[75179]: 5.0 scrub ok
Feb 01 14:53:45 compute-0 ceph-mon[75179]: 10.1 scrub starts
Feb 01 14:53:45 compute-0 ceph-mon[75179]: 10.1 scrub ok
Feb 01 14:53:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Feb 01 14:53:46 compute-0 ceph-mon[75179]: pgmap v191: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Feb 01 14:53:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb 01 14:53:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb 01 14:53:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 01 14:53:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb 01 14:53:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 01 14:53:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb 01 14:53:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 01 14:53:47 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb 01 14:53:48 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb 01 14:53:48 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:53:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:53:48 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb 01 14:53:48 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb 01 14:53:48 compute-0 ceph-mon[75179]: pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb 01 14:53:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 01 14:53:48 compute-0 ceph-mon[75179]: osdmap e97: 3 total, 3 up, 3 in
Feb 01 14:53:48 compute-0 ceph-mon[75179]: 10.0 scrub starts
Feb 01 14:53:48 compute-0 ceph-mon[75179]: 10.0 scrub ok
Feb 01 14:53:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 01 14:53:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 01 14:53:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb 01 14:53:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 01 14:53:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb 01 14:53:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 01 14:53:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb 01 14:53:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb 01 14:53:49 compute-0 ceph-mon[75179]: 8.7 scrub starts
Feb 01 14:53:49 compute-0 ceph-mon[75179]: 8.7 scrub ok
Feb 01 14:53:49 compute-0 ceph-mon[75179]: 2.8 scrub starts
Feb 01 14:53:49 compute-0 ceph-mon[75179]: 2.8 scrub ok
Feb 01 14:53:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 01 14:53:50 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 01 14:53:50 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 01 14:53:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:50 compute-0 ceph-mon[75179]: pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 01 14:53:50 compute-0 ceph-mon[75179]: osdmap e98: 3 total, 3 up, 3 in
Feb 01 14:53:50 compute-0 ceph-mon[75179]: 2.1 scrub starts
Feb 01 14:53:50 compute-0 ceph-mon[75179]: 2.1 scrub ok
Feb 01 14:53:51 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 01 14:53:51 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 01 14:53:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb 01 14:53:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 01 14:53:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb 01 14:53:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 01 14:53:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb 01 14:53:51 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 99 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=8.273852348s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=57'486 lcod 57'486 active pruub 178.746078491s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:51 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 99 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99 pruub=8.273748398s) [2] r=-1 lpr=99 pi=[55,99)/1 crt=57'486 lcod 57'486 unknown NOTIFY pruub 178.746078491s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb 01 14:53:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:51 compute-0 ceph-mon[75179]: 3.a scrub starts
Feb 01 14:53:51 compute-0 ceph-mon[75179]: 3.a scrub ok
Feb 01 14:53:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 01 14:53:52 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb 01 14:53:52 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb 01 14:53:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb 01 14:53:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb 01 14:53:52 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb 01 14:53:52 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 100 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:52 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 100 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=55/56 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=0 lpr=100 pi=[55,100)/1 crt=57'486 lcod 57'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:52 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:52 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:52 compute-0 ceph-mon[75179]: pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 01 14:53:52 compute-0 ceph-mon[75179]: osdmap e99: 3 total, 3 up, 3 in
Feb 01 14:53:52 compute-0 ceph-mon[75179]: 2.b scrub starts
Feb 01 14:53:52 compute-0 ceph-mon[75179]: 2.b scrub ok
Feb 01 14:53:52 compute-0 ceph-mon[75179]: osdmap e100: 3 total, 3 up, 3 in
Feb 01 14:53:53 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb 01 14:53:53 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb 01 14:53:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb 01 14:53:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 01 14:53:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb 01 14:53:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 01 14:53:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb 01 14:53:53 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb 01 14:53:53 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[55,100)/1 crt=57'487 lcod 57'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:53 compute-0 ceph-mon[75179]: 5.6 scrub starts
Feb 01 14:53:53 compute-0 ceph-mon[75179]: 5.6 scrub ok
Feb 01 14:53:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 01 14:53:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 01 14:53:53 compute-0 ceph-mon[75179]: osdmap e101: 3 total, 3 up, 3 in
Feb 01 14:53:54 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 01 14:53:54 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 01 14:53:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb 01 14:53:54 compute-0 ceph-mon[75179]: pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:53:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb 01 14:53:54 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb 01 14:53:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.973391533s) [2] async=[2] r=-1 lpr=102 pi=[55,102)/1 crt=57'487 lcod 57'486 active pruub 188.486877441s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:54 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102 pruub=14.973290443s) [2] r=-1 lpr=102 pi=[55,102)/1 crt=57'487 lcod 57'486 unknown NOTIFY pruub 188.486877441s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:53:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:53:54 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:53:55 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb 01 14:53:55 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb 01 14:53:55 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 01 14:53:55 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 01 14:53:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:53:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Feb 01 14:53:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb 01 14:53:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb 01 14:53:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb 01 14:53:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb 01 14:53:55 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 11.5 scrub starts
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 11.5 scrub ok
Feb 01 14:53:55 compute-0 ceph-mon[75179]: osdmap e102: 3 total, 3 up, 3 in
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 8.c scrub starts
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 8.c scrub ok
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 10.a scrub starts
Feb 01 14:53:55 compute-0 ceph-mon[75179]: 10.a scrub ok
Feb 01 14:53:55 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:53:56 compute-0 ceph-mon[75179]: pgmap v202: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Feb 01 14:53:56 compute-0 ceph-mon[75179]: 3.d scrub starts
Feb 01 14:53:56 compute-0 ceph-mon[75179]: 3.d scrub ok
Feb 01 14:53:56 compute-0 ceph-mon[75179]: osdmap e103: 3 total, 3 up, 3 in
Feb 01 14:53:57 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 01 14:53:57 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 01 14:53:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 1 objects/s recovering
Feb 01 14:53:57 compute-0 ceph-mon[75179]: 11.f scrub starts
Feb 01 14:53:57 compute-0 ceph-mon[75179]: 11.f scrub ok
Feb 01 14:53:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb 01 14:53:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb 01 14:53:58 compute-0 ceph-mon[75179]: pgmap v204: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 1 objects/s recovering
Feb 01 14:53:58 compute-0 ceph-mon[75179]: 5.3 scrub starts
Feb 01 14:53:58 compute-0 ceph-mon[75179]: 5.3 scrub ok
Feb 01 14:53:59 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb 01 14:53:59 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb 01 14:53:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:53:59 compute-0 ceph-mon[75179]: 8.5 scrub starts
Feb 01 14:53:59 compute-0 ceph-mon[75179]: 8.5 scrub ok
Feb 01 14:54:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 01 14:54:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 01 14:54:00 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb 01 14:54:00 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb 01 14:54:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 01 14:54:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 01 14:54:00 compute-0 ceph-mon[75179]: pgmap v205: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 10.c scrub starts
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 10.c scrub ok
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 3.6 scrub starts
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 3.6 scrub ok
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 11.7 scrub starts
Feb 01 14:54:00 compute-0 ceph-mon[75179]: 11.7 scrub ok
Feb 01 14:54:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb 01 14:54:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb 01 14:54:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb 01 14:54:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb 01 14:54:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 01 14:54:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb 01 14:54:01 compute-0 ceph-mon[75179]: 7.b scrub starts
Feb 01 14:54:01 compute-0 ceph-mon[75179]: 7.b scrub ok
Feb 01 14:54:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 01 14:54:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 01 14:54:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb 01 14:54:01 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb 01 14:54:02 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 01 14:54:02 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 01 14:54:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb 01 14:54:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb 01 14:54:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb 01 14:54:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb 01 14:54:02 compute-0 ceph-mon[75179]: pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb 01 14:54:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 01 14:54:02 compute-0 ceph-mon[75179]: osdmap e104: 3 total, 3 up, 3 in
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 5.e scrub starts
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 5.e scrub ok
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 11.e scrub starts
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 11.e scrub ok
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 7.14 scrub starts
Feb 01 14:54:02 compute-0 ceph-mon[75179]: 7.14 scrub ok
Feb 01 14:54:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb 01 14:54:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 01 14:54:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb 01 14:54:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 01 14:54:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 01 14:54:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb 01 14:54:04 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb 01 14:54:04 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064671516s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 active pruub 187.699172974s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:04 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:04 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:04 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 01 14:54:04 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 01 14:54:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb 01 14:54:05 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:05 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:05 compute-0 ceph-mon[75179]: pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 01 14:54:05 compute-0 ceph-mon[75179]: osdmap e105: 3 total, 3 up, 3 in
Feb 01 14:54:05 compute-0 ceph-mon[75179]: 3.10 scrub starts
Feb 01 14:54:05 compute-0 ceph-mon[75179]: 3.10 scrub ok
Feb 01 14:54:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:05 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:05 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 01 14:54:05 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 01 14:54:05 compute-0 sudo[101605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:54:05 compute-0 sudo[101605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:05 compute-0 sudo[101605]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:05 compute-0 sudo[101630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:54:05 compute-0 sudo[101630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:05 compute-0 sudo[101630]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:54:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:54:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:54:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:05 compute-0 sudo[101686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:54:05 compute-0 sudo[101686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:05 compute-0 sudo[101686]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:05 compute-0 sudo[101711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:54:05 compute-0 sudo[101711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb 01 14:54:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb 01 14:54:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb 01 14:54:06 compute-0 ceph-mon[75179]: osdmap e106: 3 total, 3 up, 3 in
Feb 01 14:54:06 compute-0 ceph-mon[75179]: 5.d scrub starts
Feb 01 14:54:06 compute-0 ceph-mon[75179]: 5.d scrub ok
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:54:06 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:54:06 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.107436639 +0000 UTC m=+0.044505823 container create 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:54:06 compute-0 systemd[1]: Started libpod-conmon-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope.
Feb 01 14:54:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.082008429 +0000 UTC m=+0.019077663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.186430103 +0000 UTC m=+0.123499337 container init 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.194353134 +0000 UTC m=+0.131422318 container start 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:54:06 compute-0 flamboyant_boyd[101766]: 167 167
Feb 01 14:54:06 compute-0 systemd[1]: libpod-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope: Deactivated successfully.
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.199154788 +0000 UTC m=+0.136223972 container attach 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.199475897 +0000 UTC m=+0.136545081 container died 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f602a4d7c7890c421c9fff58c0da8205e9e65094ef0930f6546bd2bc37dd4b1e-merged.mount: Deactivated successfully.
Feb 01 14:54:06 compute-0 podman[101750]: 2026-02-01 14:54:06.24401889 +0000 UTC m=+0.181088054 container remove 6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:54:06 compute-0 systemd[1]: libpod-conmon-6e8f2bbe3b8ecef7b83f53d0add90a6f091efdd13066343fda33e5da2a446fc0.scope: Deactivated successfully.
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.392628648 +0000 UTC m=+0.044042330 container create c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:54:06 compute-0 systemd[1]: Started libpod-conmon-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope.
Feb 01 14:54:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.465022768 +0000 UTC m=+0.116436450 container init c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.372443235 +0000 UTC m=+0.023856937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.475609734 +0000 UTC m=+0.127023406 container start c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.479224515 +0000 UTC m=+0.130638257 container attach c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 01 14:54:06 compute-0 hardcore_chatterjee[101806]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:54:06 compute-0 hardcore_chatterjee[101806]: --> All data devices are unavailable
Feb 01 14:54:06 compute-0 systemd[1]: libpod-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope: Deactivated successfully.
Feb 01 14:54:06 compute-0 podman[101789]: 2026-02-01 14:54:06.970936978 +0000 UTC m=+0.622350640 container died c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 01 14:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa8ab999c00936ad17414f083501e55e484fd3159517098ac9aedf161d424d0-merged.mount: Deactivated successfully.
Feb 01 14:54:07 compute-0 podman[101789]: 2026-02-01 14:54:07.023453444 +0000 UTC m=+0.674867146 container remove c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:54:07 compute-0 systemd[1]: libpod-conmon-c07dc46e0af92c468673dc606c52f51a3f03898c2b1a48cc0df92c8b04b445e1.scope: Deactivated successfully.
Feb 01 14:54:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb 01 14:54:07 compute-0 ceph-mon[75179]: pgmap v211: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:07 compute-0 ceph-mon[75179]: osdmap e107: 3 total, 3 up, 3 in
Feb 01 14:54:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb 01 14:54:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb 01 14:54:07 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977913857s) [0] async=[0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 active pruub 193.453887939s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:07 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:07 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:07 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:07 compute-0 sudo[101711]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:07 compute-0 sudo[101840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:54:07 compute-0 sudo[101840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:07 compute-0 sudo[101840]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:07 compute-0 sudo[101865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:54:07 compute-0 sudo[101865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.499637573 +0000 UTC m=+0.045734507 container create 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:54:07 compute-0 systemd[1]: Started libpod-conmon-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope.
Feb 01 14:54:07 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.482771112 +0000 UTC m=+0.028868076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.582208137 +0000 UTC m=+0.128305141 container init 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.587522756 +0000 UTC m=+0.133619720 container start 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.593202054 +0000 UTC m=+0.139298978 container attach 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 14:54:07 compute-0 elegant_agnesi[101919]: 167 167
Feb 01 14:54:07 compute-0 systemd[1]: libpod-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope: Deactivated successfully.
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.594071538 +0000 UTC m=+0.140168502 container died 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-27938188ca148d3415040e8a8212c0662c5118535d5cf00d77a178e03958f685-merged.mount: Deactivated successfully.
Feb 01 14:54:07 compute-0 podman[101902]: 2026-02-01 14:54:07.627328766 +0000 UTC m=+0.173425700 container remove 7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:54:07 compute-0 systemd[1]: libpod-conmon-7f60c61266316d3df968dd9284c6871df3ab1f31af4cc4c007c9b077ad165682.scope: Deactivated successfully.
Feb 01 14:54:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb 01 14:54:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb 01 14:54:07 compute-0 podman[101943]: 2026-02-01 14:54:07.783562577 +0000 UTC m=+0.065772507 container create 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:54:07 compute-0 systemd[1]: Started libpod-conmon-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope.
Feb 01 14:54:07 compute-0 podman[101943]: 2026-02-01 14:54:07.751538283 +0000 UTC m=+0.033748263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:07 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:07 compute-0 podman[101943]: 2026-02-01 14:54:07.896226291 +0000 UTC m=+0.178436211 container init 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:54:07 compute-0 podman[101943]: 2026-02-01 14:54:07.902751253 +0000 UTC m=+0.184961153 container start 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 14:54:07 compute-0 podman[101943]: 2026-02-01 14:54:07.906484567 +0000 UTC m=+0.188694487 container attach 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 14:54:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb 01 14:54:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb 01 14:54:08 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb 01 14:54:08 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:08 compute-0 ceph-mon[75179]: osdmap e108: 3 total, 3 up, 3 in
Feb 01 14:54:08 compute-0 ceph-mon[75179]: 7.16 scrub starts
Feb 01 14:54:08 compute-0 ceph-mon[75179]: 7.16 scrub ok
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]: {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     "0": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "devices": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "/dev/loop3"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             ],
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_name": "ceph_lv0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_size": "21470642176",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "name": "ceph_lv0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "tags": {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_name": "ceph",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.crush_device_class": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.encrypted": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.objectstore": "bluestore",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_id": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.vdo": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.with_tpm": "0"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             },
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "vg_name": "ceph_vg0"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         }
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     ],
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     "1": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "devices": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "/dev/loop4"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             ],
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_name": "ceph_lv1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_size": "21470642176",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "name": "ceph_lv1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "tags": {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_name": "ceph",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.crush_device_class": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.encrypted": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.objectstore": "bluestore",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_id": "1",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.vdo": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.with_tpm": "0"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             },
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "vg_name": "ceph_vg1"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         }
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     ],
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     "2": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "devices": [
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "/dev/loop5"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             ],
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_name": "ceph_lv2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_size": "21470642176",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "name": "ceph_lv2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "tags": {
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.cluster_name": "ceph",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.crush_device_class": "",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.encrypted": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.objectstore": "bluestore",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osd_id": "2",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.vdo": "0",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:                 "ceph.with_tpm": "0"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             },
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "type": "block",
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:             "vg_name": "ceph_vg2"
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:         }
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]:     ]
Feb 01 14:54:08 compute-0 thirsty_hellman[101960]: }
Feb 01 14:54:08 compute-0 systemd[1]: libpod-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope: Deactivated successfully.
Feb 01 14:54:08 compute-0 podman[101943]: 2026-02-01 14:54:08.257088432 +0000 UTC m=+0.539298372 container died 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 14:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0ed0f6e9cb0b5c9cf2f1e8042a0e905c4fdb2263318488b88d3dc14fda0adff-merged.mount: Deactivated successfully.
Feb 01 14:54:08 compute-0 podman[101943]: 2026-02-01 14:54:08.318184537 +0000 UTC m=+0.600394457 container remove 6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hellman, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:54:08 compute-0 systemd[1]: libpod-conmon-6181cf6690420fe9553f2e9783867b6057cb3bf632c016f698df991ba9d50ad0.scope: Deactivated successfully.
Feb 01 14:54:08 compute-0 sudo[101865]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:08 compute-0 sudo[101980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:54:08 compute-0 sudo[101980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:08 compute-0 sudo[101980]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:08 compute-0 sudo[102005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:54:08 compute-0 sudo[102005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:08 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.723801117 +0000 UTC m=+0.041901980 container create 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 14:54:08 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 01 14:54:08 compute-0 systemd[1]: Started libpod-conmon-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope.
Feb 01 14:54:08 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.701528386 +0000 UTC m=+0.019629199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.805065435 +0000 UTC m=+0.123166358 container init 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.812790541 +0000 UTC m=+0.130891394 container start 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.816495404 +0000 UTC m=+0.134596317 container attach 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 14:54:08 compute-0 systemd[1]: libpod-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope: Deactivated successfully.
Feb 01 14:54:08 compute-0 nice_gates[102058]: 167 167
Feb 01 14:54:08 compute-0 conmon[102058]: conmon 833ac2b26a960b2b3175 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope/container/memory.events
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.81920893 +0000 UTC m=+0.137309773 container died 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-593bf71f63e169de258920f10740b68cf7a8e0924e10a6aae28c9b4fbbaee207-merged.mount: Deactivated successfully.
Feb 01 14:54:08 compute-0 podman[102042]: 2026-02-01 14:54:08.866993133 +0000 UTC m=+0.185093996 container remove 833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:54:08 compute-0 systemd[1]: libpod-conmon-833ac2b26a960b2b3175f3189a3d1e1eba6778ec4a3ea86a40bf0495e3b1c037.scope: Deactivated successfully.
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.041462743 +0000 UTC m=+0.054267766 container create 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 14:54:09 compute-0 ceph-mon[75179]: pgmap v214: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:09 compute-0 ceph-mon[75179]: osdmap e109: 3 total, 3 up, 3 in
Feb 01 14:54:09 compute-0 ceph-mon[75179]: 8.19 scrub starts
Feb 01 14:54:09 compute-0 ceph-mon[75179]: 8.19 scrub ok
Feb 01 14:54:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 01 14:54:09 compute-0 systemd[1]: Started libpod-conmon-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope.
Feb 01 14:54:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.013896183 +0000 UTC m=+0.026701266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:54:09 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.139730114 +0000 UTC m=+0.152535137 container init 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.145524616 +0000 UTC m=+0.158329639 container start 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.149197958 +0000 UTC m=+0.162003041 container attach 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:54:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:09 compute-0 lvm[102176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:54:09 compute-0 lvm[102176]: VG ceph_vg1 finished
Feb 01 14:54:09 compute-0 lvm[102175]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:54:09 compute-0 lvm[102175]: VG ceph_vg0 finished
Feb 01 14:54:09 compute-0 lvm[102178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:54:09 compute-0 lvm[102178]: VG ceph_vg2 finished
Feb 01 14:54:09 compute-0 mystifying_mclean[102096]: {}
Feb 01 14:54:09 compute-0 systemd[1]: libpod-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Deactivated successfully.
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.907038438 +0000 UTC m=+0.919843491 container died 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:54:09 compute-0 systemd[1]: libpod-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Consumed 1.087s CPU time.
Feb 01 14:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9371cc6183666b943effd8bfaae61be0d02294b78a58384f763c1d4c8f34e7e-merged.mount: Deactivated successfully.
Feb 01 14:54:09 compute-0 podman[102079]: 2026-02-01 14:54:09.942399425 +0000 UTC m=+0.955204418 container remove 13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 14:54:09 compute-0 systemd[1]: libpod-conmon-13675cabba0f58956fd0229b5a194921a131de61b0949ec8c14c6e7b658e2ebb.scope: Deactivated successfully.
Feb 01 14:54:09 compute-0 sudo[102005]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:54:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:54:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:10 compute-0 sudo[102192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:54:10 compute-0 sudo[102192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:54:10 compute-0 sudo[102192]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:10 compute-0 ceph-mon[75179]: 5.1c scrub starts
Feb 01 14:54:10 compute-0 ceph-mon[75179]: 5.1c scrub ok
Feb 01 14:54:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:54:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 01 14:54:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 01 14:54:11 compute-0 ceph-mon[75179]: pgmap v216: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:11 compute-0 ceph-mon[75179]: 3.13 scrub starts
Feb 01 14:54:11 compute-0 ceph-mon[75179]: 3.13 scrub ok
Feb 01 14:54:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:54:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb 01 14:54:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 01 14:54:11 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 01 14:54:11 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 01 14:54:12 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 01 14:54:12 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 01 14:54:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb 01 14:54:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 01 14:54:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb 01 14:54:12 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb 01 14:54:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 01 14:54:12 compute-0 ceph-mon[75179]: 7.17 scrub starts
Feb 01 14:54:12 compute-0 ceph-mon[75179]: 7.17 scrub ok
Feb 01 14:54:12 compute-0 sudo[101454]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:13 compute-0 ceph-mon[75179]: pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:54:13 compute-0 ceph-mon[75179]: 5.1b scrub starts
Feb 01 14:54:13 compute-0 ceph-mon[75179]: 5.1b scrub ok
Feb 01 14:54:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 01 14:54:13 compute-0 ceph-mon[75179]: osdmap e110: 3 total, 3 up, 3 in
Feb 01 14:54:13 compute-0 sudo[102366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkngrplpszprdomfuirnyjdqodymwpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957653.064186-132-29179282025656/AnsiballZ_command.py'
Feb 01 14:54:13 compute-0 sudo[102366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 1 objects/s recovering
Feb 01 14:54:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb 01 14:54:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 01 14:54:13 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb 01 14:54:13 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb 01 14:54:13 compute-0 python3.9[102368]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:54:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb 01 14:54:14 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 01 14:54:14 compute-0 ceph-mon[75179]: 7.10 scrub starts
Feb 01 14:54:14 compute-0 ceph-mon[75179]: 7.10 scrub ok
Feb 01 14:54:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 01 14:54:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb 01 14:54:14 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb 01 14:54:14 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319766998s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 active pruub 195.875000000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:14 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:14 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:14 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 01 14:54:14 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 01 14:54:14 compute-0 sudo[102366]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 01 14:54:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 01 14:54:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb 01 14:54:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb 01 14:54:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb 01 14:54:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb 01 14:54:15 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb 01 14:54:15 compute-0 ceph-mon[75179]: pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 1 objects/s recovering
Feb 01 14:54:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 01 14:54:15 compute-0 ceph-mon[75179]: osdmap e111: 3 total, 3 up, 3 in
Feb 01 14:54:15 compute-0 ceph-mon[75179]: 5.2 scrub starts
Feb 01 14:54:15 compute-0 ceph-mon[75179]: 5.2 scrub ok
Feb 01 14:54:15 compute-0 ceph-mon[75179]: 3.14 scrub starts
Feb 01 14:54:15 compute-0 ceph-mon[75179]: 3.14 scrub ok
Feb 01 14:54:15 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:15 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:15 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:15 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:15 compute-0 sudo[102653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvawvgxhxgrovtvdafukqanvujckndo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957654.8855426-140-112964407052778/AnsiballZ_selinux.py'
Feb 01 14:54:15 compute-0 sudo[102653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:15 compute-0 python3.9[102655]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 01 14:54:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:54:15 compute-0 sudo[102653]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb 01 14:54:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb 01 14:54:16 compute-0 ceph-mon[75179]: 2.1e scrub starts
Feb 01 14:54:16 compute-0 ceph-mon[75179]: 2.1e scrub ok
Feb 01 14:54:16 compute-0 ceph-mon[75179]: osdmap e112: 3 total, 3 up, 3 in
Feb 01 14:54:16 compute-0 ceph-mon[75179]: pgmap v222: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Feb 01 14:54:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb 01 14:54:16 compute-0 sudo[102805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgvncqglwrrscqgbscbtgwvmcuburctl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957656.0325975-151-57164796773197/AnsiballZ_command.py'
Feb 01 14:54:16 compute-0 sudo[102805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:16 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:16 compute-0 python3.9[102807]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 01 14:54:16 compute-0 sudo[102805]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:16 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 01 14:54:16 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 01 14:54:16 compute-0 sudo[102957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbniyyrcnmfqljjbmhemgdhfnnfafpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957656.6122937-159-102223458184617/AnsiballZ_file.py'
Feb 01 14:54:16 compute-0 sudo[102957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:17 compute-0 python3.9[102959]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:54:17 compute-0 sudo[102957]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb 01 14:54:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb 01 14:54:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb 01 14:54:17 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:17 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:17 compute-0 ceph-mon[75179]: osdmap e113: 3 total, 3 up, 3 in
Feb 01 14:54:17 compute-0 ceph-mon[75179]: 11.1d scrub starts
Feb 01 14:54:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244387627s) [0] async=[0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 active pruub 203.837112427s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:17 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:17 compute-0 ceph-mon[75179]: 11.1d scrub ok
Feb 01 14:54:17 compute-0 sudo[103109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyybvjmyuoctniyeonnwqtfuubiguwln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957657.2636795-167-162780977710239/AnsiballZ_mount.py'
Feb 01 14:54:17 compute-0 sudo[103109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:54:17
Feb 01 14:54:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:54:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Feb 01 14:54:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:17 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb 01 14:54:17 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb 01 14:54:17 compute-0 python3.9[103111]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 01 14:54:17 compute-0 sudo[103109]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 01 14:54:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb 01 14:54:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 01 14:54:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb 01 14:54:18 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb 01 14:54:18 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:18 compute-0 ceph-mon[75179]: osdmap e114: 3 total, 3 up, 3 in
Feb 01 14:54:18 compute-0 ceph-mon[75179]: pgmap v225: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:18 compute-0 ceph-mon[75179]: 8.1e scrub starts
Feb 01 14:54:18 compute-0 ceph-mon[75179]: 8.1e scrub ok
Feb 01 14:54:18 compute-0 ceph-mon[75179]: osdmap e115: 3 total, 3 up, 3 in
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:54:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:54:18 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 01 14:54:18 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 01 14:54:18 compute-0 sudo[103261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sabdbtkskaqxorqmegtpmbjxglcdinho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957658.6329176-195-143002602432371/AnsiballZ_file.py'
Feb 01 14:54:18 compute-0 sudo[103261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:19 compute-0 python3.9[103263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:54:19 compute-0 sudo[103261]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:19 compute-0 ceph-mon[75179]: 10.17 scrub starts
Feb 01 14:54:19 compute-0 ceph-mon[75179]: 10.17 scrub ok
Feb 01 14:54:19 compute-0 ceph-mon[75179]: 7.12 scrub starts
Feb 01 14:54:19 compute-0 ceph-mon[75179]: 7.12 scrub ok
Feb 01 14:54:19 compute-0 sudo[103413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svhywskgwruwjxwpanlkuwujewwaplwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957659.2956324-203-75849945747423/AnsiballZ_stat.py'
Feb 01 14:54:19 compute-0 sudo[103413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:19 compute-0 python3.9[103415]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:54:19 compute-0 sudo[103413]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:20 compute-0 sudo[103491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btovywpjtkodxpyyqbukxxbqkcnnswgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957659.2956324-203-75849945747423/AnsiballZ_file.py'
Feb 01 14:54:20 compute-0 sudo[103491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:20 compute-0 ceph-mon[75179]: pgmap v227: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:20 compute-0 python3.9[103493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:54:20 compute-0 sudo[103491]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb 01 14:54:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb 01 14:54:21 compute-0 sudo[103643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znvlkiszdygoosnmmyjxvrpiutxeeqod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957660.8197584-224-156880107117279/AnsiballZ_stat.py'
Feb 01 14:54:21 compute-0 sudo[103643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:21 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 01 14:54:21 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 01 14:54:21 compute-0 ceph-mon[75179]: 5.19 scrub starts
Feb 01 14:54:21 compute-0 ceph-mon[75179]: 5.19 scrub ok
Feb 01 14:54:21 compute-0 python3.9[103645]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:54:21 compute-0 sudo[103643]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb 01 14:54:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 01 14:54:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:54:22 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb 01 14:54:22 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb 01 14:54:22 compute-0 sudo[103797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofwgwuopbruoawsdvwvzahojusfrqozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957661.7048686-237-279591347885006/AnsiballZ_getent.py'
Feb 01 14:54:22 compute-0 sudo[103797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb 01 14:54:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:54:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb 01 14:54:22 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb 01 14:54:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.231036186s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 active pruub 204.880294800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:22 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:22 compute-0 ceph-mon[75179]: 3.3 scrub starts
Feb 01 14:54:22 compute-0 ceph-mon[75179]: 3.3 scrub ok
Feb 01 14:54:22 compute-0 ceph-mon[75179]: pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb 01 14:54:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 01 14:54:22 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:22 compute-0 python3.9[103799]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 01 14:54:22 compute-0 sudo[103797]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:22 compute-0 sudo[103950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ochuggsqinhwcqtlphlayfmbfvfvpldp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957662.594698-247-86315843063628/AnsiballZ_getent.py'
Feb 01 14:54:22 compute-0 sudo[103950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:23 compute-0 python3.9[103952]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 01 14:54:23 compute-0 sudo[103950]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb 01 14:54:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb 01 14:54:23 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb 01 14:54:23 compute-0 ceph-mon[75179]: 8.15 scrub starts
Feb 01 14:54:23 compute-0 ceph-mon[75179]: 8.15 scrub ok
Feb 01 14:54:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 01 14:54:23 compute-0 ceph-mon[75179]: osdmap e116: 3 total, 3 up, 3 in
Feb 01 14:54:23 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:23 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:23 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:23 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:23 compute-0 sudo[104103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khqqsetyvqffyoypniokjivuadvblnlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957663.2331839-255-81360139407939/AnsiballZ_group.py'
Feb 01 14:54:23 compute-0 sudo[104103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb 01 14:54:23 compute-0 python3.9[104105]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 14:54:23 compute-0 sudo[104103]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb 01 14:54:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb 01 14:54:24 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb 01 14:54:24 compute-0 ceph-mon[75179]: osdmap e117: 3 total, 3 up, 3 in
Feb 01 14:54:24 compute-0 ceph-mon[75179]: pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Feb 01 14:54:24 compute-0 sudo[104255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmadifkwcisqawlwgglfojhernouolcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957663.9976332-264-252950866667859/AnsiballZ_file.py'
Feb 01 14:54:24 compute-0 sudo[104255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:24 compute-0 python3.9[104257]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 01 14:54:24 compute-0 sudo[104255]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 01 14:54:24 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 01 14:54:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 01 14:54:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 01 14:54:25 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 01 14:54:25 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 01 14:54:25 compute-0 sudo[104407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndyovlwubhibcywuqlmpfnxmrmmovuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957664.8003733-275-28474207752016/AnsiballZ_dnf.py'
Feb 01 14:54:25 compute-0 sudo[104407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb 01 14:54:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb 01 14:54:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb 01 14:54:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:25 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 14:54:25 compute-0 ceph-mon[75179]: osdmap e118: 3 total, 3 up, 3 in
Feb 01 14:54:25 compute-0 ceph-mon[75179]: 5.18 scrub starts
Feb 01 14:54:25 compute-0 ceph-mon[75179]: 5.18 scrub ok
Feb 01 14:54:25 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578289032s) [1] async=[1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 active pruub 212.280242920s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 14:54:25 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 14:54:25 compute-0 python3.9[104409]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:54:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Feb 01 14:54:25 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 01 14:54:25 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 01 14:54:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb 01 14:54:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb 01 14:54:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb 01 14:54:26 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 7.1a scrub starts
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 7.1a scrub ok
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 2.1f scrub starts
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 2.1f scrub ok
Feb 01 14:54:26 compute-0 ceph-mon[75179]: osdmap e119: 3 total, 3 up, 3 in
Feb 01 14:54:26 compute-0 ceph-mon[75179]: pgmap v234: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 5.1a scrub starts
Feb 01 14:54:26 compute-0 ceph-mon[75179]: 5.1a scrub ok
Feb 01 14:54:26 compute-0 ceph-mon[75179]: osdmap e120: 3 total, 3 up, 3 in
Feb 01 14:54:26 compute-0 sudo[104407]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:26 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb 01 14:54:26 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb 01 14:54:26 compute-0 sudo[104560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqfixwdyflowgbyvqrqpsbqyaniwcnvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957666.5977352-283-77720068922625/AnsiballZ_file.py'
Feb 01 14:54:26 compute-0 sudo[104560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:27 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb 01 14:54:27 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb 01 14:54:27 compute-0 python3.9[104562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:54:27 compute-0 sudo[104560]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:27 compute-0 ceph-mon[75179]: 5.1d scrub starts
Feb 01 14:54:27 compute-0 ceph-mon[75179]: 5.1d scrub ok
Feb 01 14:54:27 compute-0 sudo[104712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afkvrflzznpusxctthugwcsynhhejeun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957667.3015635-291-75231356910747/AnsiballZ_stat.py'
Feb 01 14:54:27 compute-0 sudo[104712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Feb 01 14:54:27 compute-0 python3.9[104714]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:54:27 compute-0 sudo[104712]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1243115580546916e-06 of space, bias 4.0, pg target 0.00254917386966563 quantized to 16 (current 16)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.260577423976037e-06 of space, bias 1.0, pg target 0.001278173227192811 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:54:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:54:28 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 01 14:54:28 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 01 14:54:28 compute-0 sudo[104790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqieuaydjcnjjaopprydvxkkmsrpkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957667.3015635-291-75231356910747/AnsiballZ_file.py'
Feb 01 14:54:28 compute-0 sudo[104790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:28 compute-0 ceph-mon[75179]: 11.15 scrub starts
Feb 01 14:54:28 compute-0 ceph-mon[75179]: 11.15 scrub ok
Feb 01 14:54:28 compute-0 ceph-mon[75179]: pgmap v236: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Feb 01 14:54:28 compute-0 python3.9[104792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:54:28 compute-0 sudo[104790]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:28 compute-0 sudo[104942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebwcageqchowrraknkujqxqszjmmlyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957668.6210842-304-73598745776565/AnsiballZ_stat.py'
Feb 01 14:54:28 compute-0 sudo[104942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:29 compute-0 python3.9[104944]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:54:29 compute-0 sudo[104942]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:29 compute-0 sudo[105020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzyquhtcqouksqdgowexbelcfliqzkms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957668.6210842-304-73598745776565/AnsiballZ_file.py'
Feb 01 14:54:29 compute-0 sudo[105020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:29 compute-0 ceph-mon[75179]: 4.18 scrub starts
Feb 01 14:54:29 compute-0 ceph-mon[75179]: 4.18 scrub ok
Feb 01 14:54:29 compute-0 python3.9[105022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:54:29 compute-0 sudo[105020]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb 01 14:54:30 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb 01 14:54:30 compute-0 sudo[105172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtvrzcfqvdxcdtimlbqyampwdmbhjmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957669.7692447-319-151263723134161/AnsiballZ_dnf.py'
Feb 01 14:54:30 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb 01 14:54:30 compute-0 sudo[105172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 01 14:54:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 01 14:54:30 compute-0 python3.9[105174]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:54:30 compute-0 ceph-mon[75179]: pgmap v237: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb 01 14:54:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb 01 14:54:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb 01 14:54:31 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb 01 14:54:31 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb 01 14:54:31 compute-0 ceph-mon[75179]: 5.5 scrub starts
Feb 01 14:54:31 compute-0 ceph-mon[75179]: 5.5 scrub ok
Feb 01 14:54:31 compute-0 ceph-mon[75179]: 4.1b scrub starts
Feb 01 14:54:31 compute-0 ceph-mon[75179]: 4.1b scrub ok
Feb 01 14:54:31 compute-0 sudo[105172]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Feb 01 14:54:32 compute-0 python3.9[105325]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:54:32 compute-0 ceph-mon[75179]: 2.2 scrub starts
Feb 01 14:54:32 compute-0 ceph-mon[75179]: 2.2 scrub ok
Feb 01 14:54:32 compute-0 ceph-mon[75179]: 11.3 scrub starts
Feb 01 14:54:32 compute-0 ceph-mon[75179]: 11.3 scrub ok
Feb 01 14:54:32 compute-0 ceph-mon[75179]: pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Feb 01 14:54:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb 01 14:54:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb 01 14:54:32 compute-0 python3.9[105477]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 01 14:54:33 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 01 14:54:33 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 01 14:54:33 compute-0 ceph-mon[75179]: 10.13 scrub starts
Feb 01 14:54:33 compute-0 ceph-mon[75179]: 10.13 scrub ok
Feb 01 14:54:33 compute-0 python3.9[105627]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:54:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Feb 01 14:54:34 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 01 14:54:34 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 01 14:54:34 compute-0 ceph-mon[75179]: 4.1a scrub starts
Feb 01 14:54:34 compute-0 ceph-mon[75179]: 4.1a scrub ok
Feb 01 14:54:34 compute-0 ceph-mon[75179]: pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Feb 01 14:54:34 compute-0 sudo[105777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjlnqryrztzixnbfyjaxgvxwoiqgzbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957673.9031482-360-165798590933297/AnsiballZ_systemd.py'
Feb 01 14:54:34 compute-0 sudo[105777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:34 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 01 14:54:34 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 01 14:54:34 compute-0 python3.9[105779]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:54:34 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 01 14:54:34 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Feb 01 14:54:34 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 01 14:54:34 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 01 14:54:35 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb 01 14:54:35 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb 01 14:54:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 01 14:54:35 compute-0 sudo[105777]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:35 compute-0 ceph-mon[75179]: 10.7 scrub starts
Feb 01 14:54:35 compute-0 ceph-mon[75179]: 10.7 scrub ok
Feb 01 14:54:35 compute-0 ceph-mon[75179]: 10.10 scrub starts
Feb 01 14:54:35 compute-0 ceph-mon[75179]: 10.10 scrub ok
Feb 01 14:54:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:35 compute-0 python3.9[105941]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 01 14:54:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb 01 14:54:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb 01 14:54:35 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 01 14:54:35 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 01 14:54:36 compute-0 ceph-mon[75179]: 8.11 scrub starts
Feb 01 14:54:36 compute-0 ceph-mon[75179]: 8.11 scrub ok
Feb 01 14:54:36 compute-0 ceph-mon[75179]: pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:36 compute-0 ceph-mon[75179]: 10.11 scrub starts
Feb 01 14:54:36 compute-0 ceph-mon[75179]: 10.11 scrub ok
Feb 01 14:54:37 compute-0 ceph-mon[75179]: 2.f scrub starts
Feb 01 14:54:37 compute-0 ceph-mon[75179]: 2.f scrub ok
Feb 01 14:54:37 compute-0 sudo[106091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evyhedfhxnfhmumlrrfcmpeylzhiqbqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957677.3383198-417-17926654512283/AnsiballZ_systemd.py'
Feb 01 14:54:37 compute-0 sudo[106091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:37 compute-0 python3.9[106093]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:54:38 compute-0 sudo[106091]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:38 compute-0 sudo[106245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsttylskigicczwbmggkyudaadewcalh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957678.1515908-417-171842511828885/AnsiballZ_systemd.py'
Feb 01 14:54:38 compute-0 sudo[106245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:38 compute-0 ceph-mon[75179]: pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:38 compute-0 python3.9[106247]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:54:38 compute-0 sudo[106245]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:39 compute-0 sshd-session[99542]: Connection closed by 192.168.122.30 port 37862
Feb 01 14:54:39 compute-0 sshd-session[99539]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:54:39 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Feb 01 14:54:39 compute-0 systemd[1]: session-34.scope: Consumed 1min 429ms CPU time.
Feb 01 14:54:39 compute-0 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Feb 01 14:54:39 compute-0 systemd-logind[786]: Removed session 34.
Feb 01 14:54:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 01 14:54:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 01 14:54:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:39 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 01 14:54:39 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 01 14:54:40 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 01 14:54:40 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 01 14:54:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 5.1 scrub starts
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 5.1 scrub ok
Feb 01 14:54:40 compute-0 ceph-mon[75179]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 5.4 scrub starts
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 5.4 scrub ok
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 3.8 scrub starts
Feb 01 14:54:40 compute-0 ceph-mon[75179]: 3.8 scrub ok
Feb 01 14:54:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 01 14:54:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 01 14:54:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:41 compute-0 ceph-mon[75179]: 7.c scrub starts
Feb 01 14:54:41 compute-0 ceph-mon[75179]: 7.c scrub ok
Feb 01 14:54:42 compute-0 ceph-mon[75179]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:43 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 01 14:54:43 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 01 14:54:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb 01 14:54:43 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb 01 14:54:44 compute-0 sshd-session[106274]: Accepted publickey for zuul from 192.168.122.30 port 39036 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:54:44 compute-0 systemd-logind[786]: New session 35 of user zuul.
Feb 01 14:54:44 compute-0 systemd[1]: Started Session 35 of User zuul.
Feb 01 14:54:44 compute-0 sshd-session[106274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:54:44 compute-0 ceph-mon[75179]: 2.6 scrub starts
Feb 01 14:54:44 compute-0 ceph-mon[75179]: 2.6 scrub ok
Feb 01 14:54:44 compute-0 ceph-mon[75179]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:44 compute-0 ceph-mon[75179]: 7.18 scrub starts
Feb 01 14:54:44 compute-0 ceph-mon[75179]: 7.18 scrub ok
Feb 01 14:54:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 01 14:54:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 01 14:54:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:45 compute-0 python3.9[106427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:54:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:45 compute-0 ceph-mon[75179]: 11.14 scrub starts
Feb 01 14:54:45 compute-0 ceph-mon[75179]: 11.14 scrub ok
Feb 01 14:54:46 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb 01 14:54:46 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb 01 14:54:46 compute-0 sudo[106581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscwqbzswkxscnqvaaqpkplztpagpwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957685.9745336-31-131635120110426/AnsiballZ_getent.py'
Feb 01 14:54:46 compute-0 sudo[106581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:46 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb 01 14:54:46 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb 01 14:54:46 compute-0 python3.9[106583]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 01 14:54:46 compute-0 sudo[106581]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:46 compute-0 ceph-mon[75179]: pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:46 compute-0 ceph-mon[75179]: 11.d scrub starts
Feb 01 14:54:46 compute-0 ceph-mon[75179]: 11.d scrub ok
Feb 01 14:54:47 compute-0 sudo[106734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekdggplhmxkqpazodmxjghcdddqvhsyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957687.04575-43-160998336048771/AnsiballZ_setup.py'
Feb 01 14:54:47 compute-0 sudo[106734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb 01 14:54:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb 01 14:54:47 compute-0 python3.9[106736]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:54:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:47 compute-0 sudo[106734]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:47 compute-0 ceph-mon[75179]: 10.f scrub starts
Feb 01 14:54:47 compute-0 ceph-mon[75179]: 10.f scrub ok
Feb 01 14:54:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb 01 14:54:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb 01 14:54:48 compute-0 sudo[106818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqnyhnvaugffpkjewqcpcighpczmevn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957687.04575-43-160998336048771/AnsiballZ_dnf.py'
Feb 01 14:54:48 compute-0 sudo[106818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:54:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:54:48 compute-0 python3.9[106820]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 01 14:54:48 compute-0 ceph-mon[75179]: 4.2 scrub starts
Feb 01 14:54:48 compute-0 ceph-mon[75179]: 4.2 scrub ok
Feb 01 14:54:48 compute-0 ceph-mon[75179]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:48 compute-0 ceph-mon[75179]: 3.1 scrub starts
Feb 01 14:54:48 compute-0 ceph-mon[75179]: 3.1 scrub ok
Feb 01 14:54:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb 01 14:54:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb 01 14:54:49 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 01 14:54:49 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 01 14:54:49 compute-0 sudo[106818]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:49 compute-0 ceph-mon[75179]: 10.8 scrub starts
Feb 01 14:54:49 compute-0 ceph-mon[75179]: 10.8 scrub ok
Feb 01 14:54:50 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 01 14:54:50 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 01 14:54:50 compute-0 sudo[106971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glrmjrvpcbstuxqcukjvrdgaplrxgfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957689.8821967-57-173595678168141/AnsiballZ_dnf.py'
Feb 01 14:54:50 compute-0 sudo[106971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:50 compute-0 python3.9[106973]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:54:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:50 compute-0 ceph-mon[75179]: 2.7 scrub starts
Feb 01 14:54:50 compute-0 ceph-mon[75179]: 2.7 scrub ok
Feb 01 14:54:50 compute-0 ceph-mon[75179]: pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:50 compute-0 ceph-mon[75179]: 5.7 scrub starts
Feb 01 14:54:50 compute-0 ceph-mon[75179]: 5.7 scrub ok
Feb 01 14:54:51 compute-0 sudo[106971]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:52 compute-0 sudo[107124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxtfyfairkongkefqrrzqkifrcdyklr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957691.7199962-65-184963453888389/AnsiballZ_systemd.py'
Feb 01 14:54:52 compute-0 sudo[107124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:52 compute-0 python3.9[107126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 14:54:52 compute-0 sudo[107124]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:52 compute-0 ceph-mon[75179]: pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:53 compute-0 python3.9[107279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:54:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:54 compute-0 sudo[107429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnumryzhqrpnikqkovfeyiqdtwyjmwka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957693.7906022-83-88170758168679/AnsiballZ_sefcontext.py'
Feb 01 14:54:54 compute-0 sudo[107429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:54 compute-0 python3.9[107431]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 01 14:54:54 compute-0 sudo[107429]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:54 compute-0 ceph-mon[75179]: pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:54:55 compute-0 python3.9[107581]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:54:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:56 compute-0 sudo[107737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huzgutatkolwklfirzqgzivufzpjwnxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957695.8908365-101-9172810127820/AnsiballZ_dnf.py'
Feb 01 14:54:56 compute-0 sudo[107737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:56 compute-0 python3.9[107739]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:54:56 compute-0 ceph-mon[75179]: pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:57 compute-0 sudo[107737]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:58 compute-0 sudo[107890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yugoxdzydjfbkhoxdfknnflgyyfloqas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957697.6909404-109-95141216403721/AnsiballZ_command.py'
Feb 01 14:54:58 compute-0 sudo[107890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:58 compute-0 python3.9[107892]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:54:58 compute-0 ceph-mon[75179]: pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:58 compute-0 sudo[107890]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:59 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb 01 14:54:59 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb 01 14:54:59 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 01 14:54:59 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 01 14:54:59 compute-0 sudo[108177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdipwfvswvlqmhtgbpkvbppydijdofzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957699.1314857-117-270879519960058/AnsiballZ_file.py'
Feb 01 14:54:59 compute-0 sudo[108177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:54:59 compute-0 python3.9[108179]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 01 14:54:59 compute-0 sudo[108177]: pam_unix(sudo:session): session closed for user root
Feb 01 14:54:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:54:59 compute-0 ceph-mon[75179]: 3.7 scrub starts
Feb 01 14:54:59 compute-0 ceph-mon[75179]: 3.7 scrub ok
Feb 01 14:54:59 compute-0 ceph-mon[75179]: 2.4 scrub starts
Feb 01 14:54:59 compute-0 ceph-mon[75179]: 2.4 scrub ok
Feb 01 14:55:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb 01 14:55:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb 01 14:55:00 compute-0 python3.9[108329]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:55:00 compute-0 sudo[108481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgmwdaipajbcmvjswjegpfyqkoqlugtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957700.7177503-133-121143025410575/AnsiballZ_dnf.py'
Feb 01 14:55:00 compute-0 sudo[108481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:01 compute-0 ceph-mon[75179]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:01 compute-0 ceph-mon[75179]: 4.f scrub starts
Feb 01 14:55:01 compute-0 ceph-mon[75179]: 4.f scrub ok
Feb 01 14:55:01 compute-0 python3.9[108483]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:55:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 01 14:55:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 01 14:55:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:01 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 01 14:55:01 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 01 14:55:02 compute-0 ceph-mon[75179]: 2.a scrub starts
Feb 01 14:55:02 compute-0 ceph-mon[75179]: 2.a scrub ok
Feb 01 14:55:02 compute-0 sudo[108481]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:02 compute-0 sudo[108634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lemnthozambkhttnrhojinxoelelzbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957702.4482524-142-110829223244171/AnsiballZ_dnf.py'
Feb 01 14:55:02 compute-0 sudo[108634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb 01 14:55:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb 01 14:55:02 compute-0 python3.9[108636]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:55:03 compute-0 ceph-mon[75179]: pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:03 compute-0 ceph-mon[75179]: 3.5 scrub starts
Feb 01 14:55:03 compute-0 ceph-mon[75179]: 3.5 scrub ok
Feb 01 14:55:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb 01 14:55:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb 01 14:55:04 compute-0 ceph-mon[75179]: 8.b scrub starts
Feb 01 14:55:04 compute-0 ceph-mon[75179]: 8.b scrub ok
Feb 01 14:55:04 compute-0 sudo[108634]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:04 compute-0 sudo[108787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtgzhxgteakxzyhjozemqdvwxngcdwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957704.4185493-154-84546883272303/AnsiballZ_stat.py'
Feb 01 14:55:04 compute-0 sudo[108787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:04 compute-0 python3.9[108789]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:55:04 compute-0 sudo[108787]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:05 compute-0 ceph-mon[75179]: pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:05 compute-0 ceph-mon[75179]: 8.10 scrub starts
Feb 01 14:55:05 compute-0 ceph-mon[75179]: 8.10 scrub ok
Feb 01 14:55:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:05 compute-0 sudo[108941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkngjkpuaasfskpkraizoraylqmwtmkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957705.0358202-162-192913566320184/AnsiballZ_slurp.py'
Feb 01 14:55:05 compute-0 sudo[108941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:05 compute-0 python3.9[108943]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb 01 14:55:05 compute-0 sudo[108941]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:06 compute-0 sshd-session[106277]: Connection closed by 192.168.122.30 port 39036
Feb 01 14:55:06 compute-0 sshd-session[106274]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:55:06 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Feb 01 14:55:06 compute-0 systemd[1]: session-35.scope: Consumed 16.701s CPU time.
Feb 01 14:55:06 compute-0 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Feb 01 14:55:06 compute-0 systemd-logind[786]: Removed session 35.
Feb 01 14:55:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 01 14:55:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 01 14:55:07 compute-0 ceph-mon[75179]: pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 01 14:55:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 01 14:55:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb 01 14:55:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb 01 14:55:08 compute-0 ceph-mon[75179]: 7.1 scrub starts
Feb 01 14:55:08 compute-0 ceph-mon[75179]: 7.1 scrub ok
Feb 01 14:55:08 compute-0 ceph-mon[75179]: 4.d scrub starts
Feb 01 14:55:08 compute-0 ceph-mon[75179]: 4.d scrub ok
Feb 01 14:55:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb 01 14:55:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb 01 14:55:09 compute-0 ceph-mon[75179]: pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:09 compute-0 ceph-mon[75179]: 7.1f scrub starts
Feb 01 14:55:09 compute-0 ceph-mon[75179]: 7.1f scrub ok
Feb 01 14:55:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:09 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 01 14:55:09 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 01 14:55:10 compute-0 ceph-mon[75179]: 11.b scrub starts
Feb 01 14:55:10 compute-0 ceph-mon[75179]: 11.b scrub ok
Feb 01 14:55:10 compute-0 sudo[108968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:55:10 compute-0 sudo[108968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:10 compute-0 sudo[108968]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:10 compute-0 sudo[108993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:55:10 compute-0 sudo[108993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:10 compute-0 sudo[108993]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:55:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:55:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:55:10 compute-0 sudo[109049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:55:10 compute-0 sudo[109049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:10 compute-0 sudo[109049]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:10 compute-0 sudo[109074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:55:10 compute-0 sudo[109074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.894406862 +0000 UTC m=+0.037138476 container create 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:55:10 compute-0 systemd[1]: Started libpod-conmon-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope.
Feb 01 14:55:10 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.965760335 +0000 UTC m=+0.108491999 container init 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.971354134 +0000 UTC m=+0.114085778 container start 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.974413904 +0000 UTC m=+0.117145558 container attach 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 14:55:10 compute-0 boring_yalow[109128]: 167 167
Feb 01 14:55:10 compute-0 systemd[1]: libpod-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope: Deactivated successfully.
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.879411527 +0000 UTC m=+0.022143171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:10 compute-0 podman[109111]: 2026-02-01 14:55:10.976941293 +0000 UTC m=+0.119672947 container died 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6b37eb680aeefc885fa3a227901fcd92db82d913c737883cbf0ff83a32e479-merged.mount: Deactivated successfully.
Feb 01 14:55:11 compute-0 podman[109111]: 2026-02-01 14:55:11.019088043 +0000 UTC m=+0.161819687 container remove 11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:55:11 compute-0 systemd[1]: libpod-conmon-11ec899b21563bb5f5feaddf9a841e4394f38390bb0ebd677ae3e8c2741e31c8.scope: Deactivated successfully.
Feb 01 14:55:11 compute-0 ceph-mon[75179]: pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:11 compute-0 ceph-mon[75179]: 7.4 scrub starts
Feb 01 14:55:11 compute-0 ceph-mon[75179]: 7.4 scrub ok
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:55:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.180967681 +0000 UTC m=+0.044824093 container create 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 14:55:11 compute-0 systemd[1]: Started libpod-conmon-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope.
Feb 01 14:55:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.159327193 +0000 UTC m=+0.023183645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.268285442 +0000 UTC m=+0.132141904 container init 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.274226489 +0000 UTC m=+0.138082901 container start 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.279231584 +0000 UTC m=+0.143087996 container attach 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:55:11 compute-0 sshd-session[109176]: Accepted publickey for zuul from 192.168.122.30 port 40468 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:55:11 compute-0 systemd-logind[786]: New session 36 of user zuul.
Feb 01 14:55:11 compute-0 systemd[1]: Started Session 36 of User zuul.
Feb 01 14:55:11 compute-0 sshd-session[109176]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:55:11 compute-0 jovial_jackson[109169]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:55:11 compute-0 jovial_jackson[109169]: --> All data devices are unavailable
Feb 01 14:55:11 compute-0 systemd[1]: libpod-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope: Deactivated successfully.
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.67945126 +0000 UTC m=+0.543307692 container died 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec2bdd3771e177a70f953ba16819b51e2ac696c5f8291b2abdaf26f3b352f16-merged.mount: Deactivated successfully.
Feb 01 14:55:11 compute-0 podman[109152]: 2026-02-01 14:55:11.736501974 +0000 UTC m=+0.600358376 container remove 506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jackson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:55:11 compute-0 systemd[1]: libpod-conmon-506cf8f7159d2e55188d6e57ff85492f5f30f276305567e51fb1192aed18cb67.scope: Deactivated successfully.
Feb 01 14:55:11 compute-0 sudo[109074]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:11 compute-0 sudo[109257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:55:11 compute-0 sudo[109257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:11 compute-0 sudo[109257]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:11 compute-0 sudo[109282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:55:11 compute-0 sudo[109282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.151094301 +0000 UTC m=+0.035055048 container create 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 14:55:12 compute-0 systemd[1]: Started libpod-conmon-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope.
Feb 01 14:55:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.22617595 +0000 UTC m=+0.110136717 container init 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.231643226 +0000 UTC m=+0.115603973 container start 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:55:12 compute-0 busy_brahmagupta[109366]: 167 167
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.137242462 +0000 UTC m=+0.021203229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:12 compute-0 systemd[1]: libpod-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope: Deactivated successfully.
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.235448414 +0000 UTC m=+0.119409191 container attach 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.236339694 +0000 UTC m=+0.120300461 container died 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 14:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8199746a437db738c2ae41b8f196bff639527a9e1912001c16afe7c92656fda0-merged.mount: Deactivated successfully.
Feb 01 14:55:12 compute-0 podman[109320]: 2026-02-01 14:55:12.269891927 +0000 UTC m=+0.153852674 container remove 9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:55:12 compute-0 systemd[1]: libpod-conmon-9c5aad5b20e2bd6e88ea08be23a279e8d3e7555102aa70ae09d0ed1dcdfc2adb.scope: Deactivated successfully.
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.385040809 +0000 UTC m=+0.030250368 container create 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:55:12 compute-0 systemd[1]: Started libpod-conmon-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope.
Feb 01 14:55:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.457262282 +0000 UTC m=+0.102471841 container init 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.462003941 +0000 UTC m=+0.107213500 container start 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.370953444 +0000 UTC m=+0.016163023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.476585437 +0000 UTC m=+0.121795016 container attach 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:55:12 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb 01 14:55:12 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb 01 14:55:12 compute-0 python3.9[109454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:55:12 compute-0 recursing_easley[109477]: {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     "0": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "devices": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "/dev/loop3"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             ],
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_name": "ceph_lv0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_size": "21470642176",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "name": "ceph_lv0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "tags": {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_name": "ceph",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.crush_device_class": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.encrypted": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.objectstore": "bluestore",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_id": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.vdo": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.with_tpm": "0"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             },
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "vg_name": "ceph_vg0"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         }
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     ],
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     "1": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "devices": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "/dev/loop4"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             ],
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_name": "ceph_lv1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_size": "21470642176",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "name": "ceph_lv1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "tags": {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_name": "ceph",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.crush_device_class": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.encrypted": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.objectstore": "bluestore",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_id": "1",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.vdo": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.with_tpm": "0"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             },
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "vg_name": "ceph_vg1"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         }
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     ],
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     "2": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "devices": [
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "/dev/loop5"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             ],
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_name": "ceph_lv2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_size": "21470642176",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "name": "ceph_lv2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "tags": {
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.cluster_name": "ceph",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.crush_device_class": "",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.encrypted": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.objectstore": "bluestore",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osd_id": "2",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.vdo": "0",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:                 "ceph.with_tpm": "0"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             },
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "type": "block",
Feb 01 14:55:12 compute-0 recursing_easley[109477]:             "vg_name": "ceph_vg2"
Feb 01 14:55:12 compute-0 recursing_easley[109477]:         }
Feb 01 14:55:12 compute-0 recursing_easley[109477]:     ]
Feb 01 14:55:12 compute-0 recursing_easley[109477]: }
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.764664141 +0000 UTC m=+0.409873700 container died 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 14:55:12 compute-0 systemd[1]: libpod-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope: Deactivated successfully.
Feb 01 14:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-84456523c343a859ba8f46e64bad72068fe09741f44d009fc894f33f45d85df2-merged.mount: Deactivated successfully.
Feb 01 14:55:12 compute-0 podman[109460]: 2026-02-01 14:55:12.809117964 +0000 UTC m=+0.454327533 container remove 2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:55:12 compute-0 systemd[1]: libpod-conmon-2863fe36a9c3bd6b0642f0aa851d0048fbc5f779e2c6e866c8f75d65387c1528.scope: Deactivated successfully.
Feb 01 14:55:12 compute-0 sudo[109282]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:12 compute-0 sudo[109516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:55:12 compute-0 sudo[109516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:12 compute-0 sudo[109516]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:12 compute-0 sudo[109552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:55:12 compute-0 sudo[109552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:13 compute-0 ceph-mon[75179]: pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:13 compute-0 ceph-mon[75179]: 10.2 scrub starts
Feb 01 14:55:13 compute-0 ceph-mon[75179]: 10.2 scrub ok
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.158202153 +0000 UTC m=+0.037672968 container create b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 14:55:13 compute-0 systemd[1]: Started libpod-conmon-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope.
Feb 01 14:55:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.20928136 +0000 UTC m=+0.088752235 container init b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.214125371 +0000 UTC m=+0.093596186 container start b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:55:13 compute-0 busy_yalow[109730]: 167 167
Feb 01 14:55:13 compute-0 systemd[1]: libpod-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope: Deactivated successfully.
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.217164131 +0000 UTC m=+0.096634966 container attach b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 14:55:13 compute-0 conmon[109730]: conmon b4323aad3b6d1e1bb5aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope/container/memory.events
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.219756891 +0000 UTC m=+0.099227716 container died b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.137462946 +0000 UTC m=+0.016933781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1254f6b635cada57abec067d12324479992d2c587720476d59341debac36f863-merged.mount: Deactivated successfully.
Feb 01 14:55:13 compute-0 podman[109664]: 2026-02-01 14:55:13.253030127 +0000 UTC m=+0.132500952 container remove b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 14:55:13 compute-0 systemd[1]: libpod-conmon-b4323aad3b6d1e1bb5aaae4260831fb18eee38a4cf7b64e911e859a56a0fadca.scope: Deactivated successfully.
Feb 01 14:55:13 compute-0 podman[109756]: 2026-02-01 14:55:13.38779218 +0000 UTC m=+0.049454850 container create 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:55:13 compute-0 systemd[1]: Started libpod-conmon-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope.
Feb 01 14:55:13 compute-0 podman[109756]: 2026-02-01 14:55:13.368317352 +0000 UTC m=+0.029979992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:55:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:55:13 compute-0 python3.9[109732]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:55:13 compute-0 podman[109756]: 2026-02-01 14:55:13.501733754 +0000 UTC m=+0.163396434 container init 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 14:55:13 compute-0 podman[109756]: 2026-02-01 14:55:13.50848968 +0000 UTC m=+0.170152310 container start 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 14:55:13 compute-0 podman[109756]: 2026-02-01 14:55:13.512381989 +0000 UTC m=+0.174044619 container attach 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:55:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:14 compute-0 lvm[109919]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:55:14 compute-0 lvm[109919]: VG ceph_vg1 finished
Feb 01 14:55:14 compute-0 lvm[109918]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:55:14 compute-0 lvm[109918]: VG ceph_vg0 finished
Feb 01 14:55:14 compute-0 lvm[109921]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:55:14 compute-0 lvm[109921]: VG ceph_vg2 finished
Feb 01 14:55:14 compute-0 nostalgic_mclean[109773]: {}
Feb 01 14:55:14 compute-0 systemd[1]: libpod-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Deactivated successfully.
Feb 01 14:55:14 compute-0 systemd[1]: libpod-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Consumed 1.081s CPU time.
Feb 01 14:55:14 compute-0 podman[109756]: 2026-02-01 14:55:14.26806425 +0000 UTC m=+0.929726880 container died 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c112a19723c45c2776f25d35e30114f8645381804530af82e4b47b434ec91b-merged.mount: Deactivated successfully.
Feb 01 14:55:14 compute-0 podman[109756]: 2026-02-01 14:55:14.319942125 +0000 UTC m=+0.981604745 container remove 61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:55:14 compute-0 systemd[1]: libpod-conmon-61787ccfe556c8fb532d1444d8e385d59a8d96e660daa7f34d61a4a4a7f993f4.scope: Deactivated successfully.
Feb 01 14:55:14 compute-0 sudo[109552]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:55:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:55:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:14 compute-0 sudo[109989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:55:14 compute-0 sudo[109989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:55:14 compute-0 sudo[109989]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb 01 14:55:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb 01 14:55:14 compute-0 python3.9[110087]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:55:15 compute-0 ceph-mon[75179]: pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:55:15 compute-0 ceph-mon[75179]: 5.c scrub starts
Feb 01 14:55:15 compute-0 ceph-mon[75179]: 5.c scrub ok
Feb 01 14:55:15 compute-0 sshd-session[109189]: Connection closed by 192.168.122.30 port 40468
Feb 01 14:55:15 compute-0 sshd-session[109176]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:55:15 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Feb 01 14:55:15 compute-0 systemd[1]: session-36.scope: Consumed 1.956s CPU time.
Feb 01 14:55:15 compute-0 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Feb 01 14:55:15 compute-0 systemd-logind[786]: Removed session 36.
Feb 01 14:55:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb 01 14:55:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb 01 14:55:17 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 01 14:55:17 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 01 14:55:17 compute-0 ceph-mon[75179]: pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:55:17
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'vms', 'default.rgw.control']
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:55:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:18 compute-0 ceph-mon[75179]: 2.19 scrub starts
Feb 01 14:55:18 compute-0 ceph-mon[75179]: 2.19 scrub ok
Feb 01 14:55:18 compute-0 ceph-mon[75179]: 4.e scrub starts
Feb 01 14:55:18 compute-0 ceph-mon[75179]: 4.e scrub ok
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:55:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:55:19 compute-0 ceph-mon[75179]: pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:19 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 01 14:55:19 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 01 14:55:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:19 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb 01 14:55:19 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb 01 14:55:20 compute-0 ceph-mon[75179]: 2.5 scrub starts
Feb 01 14:55:20 compute-0 ceph-mon[75179]: 2.5 scrub ok
Feb 01 14:55:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 01 14:55:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 01 14:55:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:20 compute-0 sshd-session[110114]: Accepted publickey for zuul from 192.168.122.30 port 50520 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:55:20 compute-0 systemd-logind[786]: New session 37 of user zuul.
Feb 01 14:55:20 compute-0 systemd[1]: Started Session 37 of User zuul.
Feb 01 14:55:20 compute-0 sshd-session[110114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:55:21 compute-0 ceph-mon[75179]: pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:21 compute-0 ceph-mon[75179]: 5.1e scrub starts
Feb 01 14:55:21 compute-0 ceph-mon[75179]: 5.1e scrub ok
Feb 01 14:55:21 compute-0 ceph-mon[75179]: 10.b scrub starts
Feb 01 14:55:21 compute-0 ceph-mon[75179]: 10.b scrub ok
Feb 01 14:55:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:21 compute-0 python3.9[110267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:55:22 compute-0 ceph-mon[75179]: pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:22 compute-0 python3.9[110421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:55:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 01 14:55:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 01 14:55:23 compute-0 sudo[110575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwguzvymylrourthgzmkslwzqjjmtjif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957723.0981526-35-73919732466949/AnsiballZ_setup.py'
Feb 01 14:55:23 compute-0 sudo[110575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:23 compute-0 python3.9[110577]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:55:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:23 compute-0 sudo[110575]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:24 compute-0 sudo[110659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmukcqosronvfqiiwqqmuoliywskble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957723.0981526-35-73919732466949/AnsiballZ_dnf.py'
Feb 01 14:55:24 compute-0 sudo[110659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:24 compute-0 python3.9[110661]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:55:24 compute-0 ceph-mon[75179]: 2.3 scrub starts
Feb 01 14:55:24 compute-0 ceph-mon[75179]: 2.3 scrub ok
Feb 01 14:55:24 compute-0 ceph-mon[75179]: pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:24 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 01 14:55:24 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 01 14:55:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb 01 14:55:25 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb 01 14:55:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:25 compute-0 sudo[110659]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:25 compute-0 ceph-mon[75179]: 2.18 scrub starts
Feb 01 14:55:25 compute-0 ceph-mon[75179]: 2.18 scrub ok
Feb 01 14:55:25 compute-0 ceph-mon[75179]: 7.2 scrub starts
Feb 01 14:55:25 compute-0 ceph-mon[75179]: 7.2 scrub ok
Feb 01 14:55:26 compute-0 sudo[110812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdhhalrvmxykepmkmdutaarqkqdegkdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957725.839627-47-34677815331693/AnsiballZ_setup.py'
Feb 01 14:55:26 compute-0 sudo[110812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:26 compute-0 python3.9[110814]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:55:26 compute-0 sudo[110812]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:26 compute-0 ceph-mon[75179]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:27 compute-0 sudo[111007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeusookcwvlmtmyvjocrtvziuystdtbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957726.8285239-58-194433504599868/AnsiballZ_file.py'
Feb 01 14:55:27 compute-0 sudo[111007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:27 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb 01 14:55:27 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb 01 14:55:27 compute-0 python3.9[111009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:27 compute-0 sudo[111007]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:27 compute-0 ceph-mon[75179]: 8.2 scrub starts
Feb 01 14:55:27 compute-0 ceph-mon[75179]: 8.2 scrub ok
Feb 01 14:55:27 compute-0 sudo[111159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydjgibxmopkgmreclrznqcyaljlhkizi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957727.478959-66-209723736013128/AnsiballZ_command.py'
Feb 01 14:55:27 compute-0 sudo[111159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:55:27 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:55:28 compute-0 python3.9[111161]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:55:28 compute-0 sudo[111159]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 01 14:55:28 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 01 14:55:28 compute-0 sudo[111324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdbhxtwblspabaexahynzorxaivhvsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957728.3526387-74-113295701371412/AnsiballZ_stat.py'
Feb 01 14:55:28 compute-0 sudo[111324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:28 compute-0 ceph-mon[75179]: pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:28 compute-0 python3.9[111326]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:55:28 compute-0 sudo[111324]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb 01 14:55:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb 01 14:55:29 compute-0 sudo[111402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbxbmaanviuztgmnpoyhpvovfnzxgduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957728.3526387-74-113295701371412/AnsiballZ_file.py'
Feb 01 14:55:29 compute-0 sudo[111402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:29 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb 01 14:55:29 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb 01 14:55:29 compute-0 python3.9[111404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:29 compute-0 sudo[111402]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:29 compute-0 sudo[111554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcmjdjpymlrwqpfihkzpzrsvbtfiieoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957729.4823914-86-29311194091027/AnsiballZ_stat.py'
Feb 01 14:55:29 compute-0 sudo[111554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 4.5 scrub starts
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 4.5 scrub ok
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 10.4 scrub starts
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 10.4 scrub ok
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 4.1 scrub starts
Feb 01 14:55:29 compute-0 ceph-mon[75179]: 4.1 scrub ok
Feb 01 14:55:29 compute-0 python3.9[111556]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:55:30 compute-0 sudo[111554]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:30 compute-0 sudo[111632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmmsdozgeqqcryjufhoigbssracglee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957729.4823914-86-29311194091027/AnsiballZ_file.py'
Feb 01 14:55:30 compute-0 sudo[111632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:30 compute-0 python3.9[111634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:55:30 compute-0 sudo[111632]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:30 compute-0 sudo[111784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmlqyfqqajqrrsxbeoydgrzdoqygmulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957730.5725534-99-73699184214220/AnsiballZ_ini_file.py'
Feb 01 14:55:30 compute-0 sudo[111784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:30 compute-0 ceph-mon[75179]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb 01 14:55:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb 01 14:55:31 compute-0 python3.9[111786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:55:31 compute-0 sudo[111784]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:31 compute-0 sudo[111936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchsqkznkpugyzrktvwvobpwghzrernn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957731.2372582-99-183735185170085/AnsiballZ_ini_file.py'
Feb 01 14:55:31 compute-0 sudo[111936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:31 compute-0 python3.9[111938]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:55:31 compute-0 sudo[111936]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:31 compute-0 ceph-mon[75179]: 7.6 scrub starts
Feb 01 14:55:31 compute-0 ceph-mon[75179]: 7.6 scrub ok
Feb 01 14:55:32 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 01 14:55:32 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 01 14:55:32 compute-0 sudo[112088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynmsfasjzfbmooykqmvyfseqeslntlbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957731.839597-99-243629904317161/AnsiballZ_ini_file.py'
Feb 01 14:55:32 compute-0 sudo[112088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:32 compute-0 python3.9[112090]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:55:32 compute-0 sudo[112088]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 01 14:55:32 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 01 14:55:32 compute-0 sudo[112240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrtuzfehprwwfqcblxstfgyhfkregrte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957732.3716114-99-68601452093352/AnsiballZ_ini_file.py'
Feb 01 14:55:32 compute-0 sudo[112240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:32 compute-0 python3.9[112242]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:55:32 compute-0 sudo[112240]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:32 compute-0 ceph-mon[75179]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:32 compute-0 ceph-mon[75179]: 3.c scrub starts
Feb 01 14:55:32 compute-0 ceph-mon[75179]: 3.c scrub ok
Feb 01 14:55:32 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 01 14:55:32 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb 01 14:55:33 compute-0 sudo[112392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwfkewciqcpvwdoubivosqikzlivqpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957732.9321902-130-141843722209347/AnsiballZ_dnf.py'
Feb 01 14:55:33 compute-0 sudo[112392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:33 compute-0 python3.9[112394]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:55:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:33 compute-0 ceph-mon[75179]: 2.d scrub starts
Feb 01 14:55:33 compute-0 ceph-mon[75179]: 2.d scrub ok
Feb 01 14:55:33 compute-0 ceph-mon[75179]: 11.4 scrub starts
Feb 01 14:55:33 compute-0 ceph-mon[75179]: 11.4 scrub ok
Feb 01 14:55:34 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb 01 14:55:34 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb 01 14:55:34 compute-0 sudo[112392]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:34 compute-0 ceph-mon[75179]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:34 compute-0 ceph-mon[75179]: 7.5 scrub starts
Feb 01 14:55:34 compute-0 ceph-mon[75179]: 7.5 scrub ok
Feb 01 14:55:35 compute-0 sudo[112545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpatiigirupcipyczrthmuyikfbgnsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957734.9527345-141-73265427619542/AnsiballZ_setup.py'
Feb 01 14:55:35 compute-0 sudo[112545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb 01 14:55:35 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb 01 14:55:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:35 compute-0 python3.9[112547]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:55:35 compute-0 sudo[112545]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:35 compute-0 sudo[112699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwqtonvasaiqauuaqodubajmcsxaxnav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957735.7026134-149-177991123550679/AnsiballZ_stat.py'
Feb 01 14:55:35 compute-0 sudo[112699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:36 compute-0 ceph-mon[75179]: 4.4 scrub starts
Feb 01 14:55:36 compute-0 ceph-mon[75179]: 4.4 scrub ok
Feb 01 14:55:36 compute-0 python3.9[112701]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:55:36 compute-0 sudo[112699]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:36 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb 01 14:55:36 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb 01 14:55:36 compute-0 sudo[112851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viezpxhwfjrlpmdtzmeuwdaswazpjfdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957736.35044-158-30556311585137/AnsiballZ_stat.py'
Feb 01 14:55:36 compute-0 sudo[112851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:36 compute-0 python3.9[112853]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:55:36 compute-0 sudo[112851]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:37 compute-0 ceph-mon[75179]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:37 compute-0 ceph-mon[75179]: 11.2 scrub starts
Feb 01 14:55:37 compute-0 ceph-mon[75179]: 11.2 scrub ok
Feb 01 14:55:37 compute-0 sudo[113003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajkvmpmgoqetwfketaroywjfmiwrmmkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957737.0955496-168-22869994659925/AnsiballZ_command.py'
Feb 01 14:55:37 compute-0 sudo[113003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:37 compute-0 python3.9[113005]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:55:37 compute-0 sudo[113003]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:38 compute-0 sudo[113156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvfdfsuljxtjfirdrdmxynkmjwgocxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957737.7803037-178-230740971227932/AnsiballZ_service_facts.py'
Feb 01 14:55:38 compute-0 sudo[113156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb 01 14:55:38 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb 01 14:55:38 compute-0 python3.9[113158]: ansible-service_facts Invoked
Feb 01 14:55:38 compute-0 network[113175]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 14:55:38 compute-0 network[113176]: 'network-scripts' will be removed from distribution in near future.
Feb 01 14:55:38 compute-0 network[113177]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 14:55:39 compute-0 ceph-mon[75179]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:39 compute-0 ceph-mon[75179]: 11.9 scrub starts
Feb 01 14:55:39 compute-0 ceph-mon[75179]: 11.9 scrub ok
Feb 01 14:55:39 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb 01 14:55:39 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb 01 14:55:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:40 compute-0 ceph-mon[75179]: 8.d scrub starts
Feb 01 14:55:40 compute-0 ceph-mon[75179]: 8.d scrub ok
Feb 01 14:55:40 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 01 14:55:40 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 01 14:55:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:40 compute-0 sudo[113156]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:41 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb 01 14:55:41 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb 01 14:55:41 compute-0 ceph-mon[75179]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:41 compute-0 ceph-mon[75179]: 8.4 scrub starts
Feb 01 14:55:41 compute-0 ceph-mon[75179]: 8.4 scrub ok
Feb 01 14:55:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 01 14:55:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 01 14:55:41 compute-0 sudo[113460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjvbjqfqnvemohjxbtojgguucsbbndx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769957741.2745073-193-65591860688173/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769957741.2745073-193-65591860688173/args'
Feb 01 14:55:41 compute-0 sudo[113460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:41 compute-0 sudo[113460]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:42 compute-0 ceph-mon[75179]: 7.9 scrub starts
Feb 01 14:55:42 compute-0 ceph-mon[75179]: 7.9 scrub ok
Feb 01 14:55:42 compute-0 ceph-mon[75179]: 4.a scrub starts
Feb 01 14:55:42 compute-0 ceph-mon[75179]: 4.a scrub ok
Feb 01 14:55:42 compute-0 sudo[113627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezysdtmvryiqxdsuhxubwzyxncslksvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957741.865656-204-280873603762160/AnsiballZ_dnf.py'
Feb 01 14:55:42 compute-0 sudo[113627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:42 compute-0 python3.9[113629]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:55:42 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 01 14:55:42 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 01 14:55:43 compute-0 ceph-mon[75179]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:43 compute-0 ceph-mon[75179]: 7.8 scrub starts
Feb 01 14:55:43 compute-0 ceph-mon[75179]: 7.8 scrub ok
Feb 01 14:55:43 compute-0 sudo[113627]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:44 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 01 14:55:44 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 01 14:55:44 compute-0 sudo[113780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobegfqwbqwtrorwojyrktdjimjlhdws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957743.8215656-217-67486391285485/AnsiballZ_package_facts.py'
Feb 01 14:55:44 compute-0 sudo[113780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:44 compute-0 python3.9[113782]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 01 14:55:44 compute-0 sudo[113780]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:45 compute-0 ceph-mon[75179]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:45 compute-0 ceph-mon[75179]: 5.f scrub starts
Feb 01 14:55:45 compute-0 ceph-mon[75179]: 5.f scrub ok
Feb 01 14:55:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 01 14:55:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 01 14:55:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb 01 14:55:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb 01 14:55:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:45 compute-0 sudo[113932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhbbxqibpvpxcrhdgtijxrbyibzddyzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957745.3743045-227-163000913475703/AnsiballZ_stat.py'
Feb 01 14:55:45 compute-0 sudo[113932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:45 compute-0 python3.9[113934]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:55:45 compute-0 sudo[113932]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:46 compute-0 ceph-mon[75179]: 11.1 scrub starts
Feb 01 14:55:46 compute-0 ceph-mon[75179]: 11.1 scrub ok
Feb 01 14:55:46 compute-0 ceph-mon[75179]: 7.a scrub starts
Feb 01 14:55:46 compute-0 ceph-mon[75179]: 7.a scrub ok
Feb 01 14:55:46 compute-0 sudo[114010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgmmpzbscjhgzvwgyyaitbuidlodkulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957745.3743045-227-163000913475703/AnsiballZ_file.py'
Feb 01 14:55:46 compute-0 sudo[114010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:46 compute-0 python3.9[114012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:46 compute-0 sudo[114010]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:46 compute-0 sudo[114162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgalkliyyroopbgkrcogpemcmpoaygwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957746.5560408-239-150527103067050/AnsiballZ_stat.py'
Feb 01 14:55:46 compute-0 sudo[114162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:47 compute-0 ceph-mon[75179]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:47 compute-0 python3.9[114164]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:55:47 compute-0 sudo[114162]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:47 compute-0 sudo[114240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huyzoexfbuoqxksfklihazsrsjoskeoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957746.5560408-239-150527103067050/AnsiballZ_file.py'
Feb 01 14:55:47 compute-0 sudo[114240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:47 compute-0 python3.9[114242]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:47 compute-0 sudo[114240]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 01 14:55:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:55:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:55:48 compute-0 sudo[114393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfzmglizdsidjbzuuczobxofvivvcud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957748.483213-257-93492010423462/AnsiballZ_lineinfile.py'
Feb 01 14:55:48 compute-0 sudo[114393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:49 compute-0 ceph-mon[75179]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:49 compute-0 ceph-mon[75179]: 3.f scrub starts
Feb 01 14:55:49 compute-0 ceph-mon[75179]: 3.f scrub ok
Feb 01 14:55:49 compute-0 python3.9[114395]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:49 compute-0 sudo[114393]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:49 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb 01 14:55:49 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb 01 14:55:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:50 compute-0 ceph-mon[75179]: 11.8 scrub starts
Feb 01 14:55:50 compute-0 ceph-mon[75179]: 11.8 scrub ok
Feb 01 14:55:50 compute-0 sudo[114545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oembjyvslafcheowhkztswrfotwwxcpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957749.6979399-272-63124119149825/AnsiballZ_setup.py'
Feb 01 14:55:50 compute-0 sudo[114545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:50 compute-0 python3.9[114547]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:55:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:50 compute-0 sudo[114545]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:51 compute-0 ceph-mon[75179]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb 01 14:55:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb 01 14:55:51 compute-0 sudo[114629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwntsvscegmllwlekvaaebyunsflaisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957749.6979399-272-63124119149825/AnsiballZ_systemd.py'
Feb 01 14:55:51 compute-0 sudo[114629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:51 compute-0 python3.9[114631]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:55:51 compute-0 sudo[114629]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:51 compute-0 systemd[76558]: Created slice User Background Tasks Slice.
Feb 01 14:55:51 compute-0 systemd[76558]: Starting Cleanup of User's Temporary Files and Directories...
Feb 01 14:55:51 compute-0 systemd[76558]: Finished Cleanup of User's Temporary Files and Directories.
Feb 01 14:55:52 compute-0 ceph-mon[75179]: 7.e scrub starts
Feb 01 14:55:52 compute-0 ceph-mon[75179]: 7.e scrub ok
Feb 01 14:55:52 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 01 14:55:52 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 01 14:55:52 compute-0 sshd-session[110117]: Connection closed by 192.168.122.30 port 50520
Feb 01 14:55:52 compute-0 sshd-session[110114]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:55:52 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Feb 01 14:55:52 compute-0 systemd[1]: session-37.scope: Consumed 21.201s CPU time.
Feb 01 14:55:52 compute-0 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Feb 01 14:55:52 compute-0 systemd-logind[786]: Removed session 37.
Feb 01 14:55:53 compute-0 ceph-mon[75179]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:53 compute-0 ceph-mon[75179]: 2.9 scrub starts
Feb 01 14:55:53 compute-0 ceph-mon[75179]: 2.9 scrub ok
Feb 01 14:55:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:54 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb 01 14:55:54 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb 01 14:55:54 compute-0 ceph-mon[75179]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:55 compute-0 ceph-mon[75179]: 8.9 scrub starts
Feb 01 14:55:55 compute-0 ceph-mon[75179]: 8.9 scrub ok
Feb 01 14:55:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:55:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:56 compute-0 ceph-mon[75179]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:56 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 01 14:55:56 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 01 14:55:57 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb 01 14:55:57 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb 01 14:55:57 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb 01 14:55:57 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb 01 14:55:57 compute-0 ceph-mon[75179]: 7.15 scrub starts
Feb 01 14:55:57 compute-0 ceph-mon[75179]: 7.15 scrub ok
Feb 01 14:55:57 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb 01 14:55:57 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb 01 14:55:57 compute-0 sshd-session[114659]: Accepted publickey for zuul from 192.168.122.30 port 43228 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:55:57 compute-0 systemd-logind[786]: New session 38 of user zuul.
Feb 01 14:55:57 compute-0 systemd[1]: Started Session 38 of User zuul.
Feb 01 14:55:57 compute-0 sshd-session[114659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:55:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:57 compute-0 sudo[114812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfaxdkbzbqnqompdqivcnawqfcrndyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957757.5104675-17-69287438010773/AnsiballZ_file.py'
Feb 01 14:55:57 compute-0 sudo[114812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:58 compute-0 python3.9[114814]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:58 compute-0 sudo[114812]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 11.6 scrub starts
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 11.6 scrub ok
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 4.9 scrub starts
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 4.9 scrub ok
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 11.18 scrub starts
Feb 01 14:55:58 compute-0 ceph-mon[75179]: 11.18 scrub ok
Feb 01 14:55:58 compute-0 ceph-mon[75179]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:58 compute-0 sudo[114964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuatedmegetikvuadfuxjgjxucczqfds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957758.34965-29-43163624770705/AnsiballZ_stat.py'
Feb 01 14:55:58 compute-0 sudo[114964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:58 compute-0 python3.9[114966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:55:58 compute-0 sudo[114964]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:59 compute-0 sudo[115042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbvujxmewuqgqomrbwttdtxkpmqvwbbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957758.34965-29-43163624770705/AnsiballZ_file.py'
Feb 01 14:55:59 compute-0 sudo[115042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:55:59 compute-0 python3.9[115044]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:55:59 compute-0 sudo[115042]: pam_unix(sudo:session): session closed for user root
Feb 01 14:55:59 compute-0 sshd-session[114662]: Connection closed by 192.168.122.30 port 43228
Feb 01 14:55:59 compute-0 sshd-session[114659]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:55:59 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Feb 01 14:55:59 compute-0 systemd[1]: session-38.scope: Consumed 1.415s CPU time.
Feb 01 14:55:59 compute-0 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Feb 01 14:55:59 compute-0 systemd-logind[786]: Removed session 38.
Feb 01 14:55:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:55:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 01 14:56:00 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 01 14:56:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 01 14:56:00 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 01 14:56:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:00 compute-0 ceph-mon[75179]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:00 compute-0 ceph-mon[75179]: 7.f scrub starts
Feb 01 14:56:00 compute-0 ceph-mon[75179]: 7.f scrub ok
Feb 01 14:56:01 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb 01 14:56:01 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb 01 14:56:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 01 14:56:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 01 14:56:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:01 compute-0 ceph-mon[75179]: 11.1a scrub starts
Feb 01 14:56:01 compute-0 ceph-mon[75179]: 11.1a scrub ok
Feb 01 14:56:01 compute-0 ceph-mon[75179]: 7.3 scrub starts
Feb 01 14:56:01 compute-0 ceph-mon[75179]: 7.3 scrub ok
Feb 01 14:56:01 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb 01 14:56:02 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb 01 14:56:02 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 01 14:56:02 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 01 14:56:02 compute-0 ceph-mon[75179]: 5.9 scrub starts
Feb 01 14:56:02 compute-0 ceph-mon[75179]: 5.9 scrub ok
Feb 01 14:56:02 compute-0 ceph-mon[75179]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:02 compute-0 ceph-mon[75179]: 3.17 scrub starts
Feb 01 14:56:02 compute-0 ceph-mon[75179]: 3.17 scrub ok
Feb 01 14:56:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:03 compute-0 ceph-mon[75179]: 11.1b scrub starts
Feb 01 14:56:03 compute-0 ceph-mon[75179]: 11.1b scrub ok
Feb 01 14:56:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb 01 14:56:03 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb 01 14:56:04 compute-0 ceph-mon[75179]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:04 compute-0 ceph-mon[75179]: 7.13 scrub starts
Feb 01 14:56:04 compute-0 ceph-mon[75179]: 7.13 scrub ok
Feb 01 14:56:05 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 01 14:56:05 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 01 14:56:05 compute-0 sshd-session[115069]: Accepted publickey for zuul from 192.168.122.30 port 35850 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:56:05 compute-0 systemd-logind[786]: New session 39 of user zuul.
Feb 01 14:56:05 compute-0 systemd[1]: Started Session 39 of User zuul.
Feb 01 14:56:05 compute-0 sshd-session[115069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:56:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:05 compute-0 ceph-mon[75179]: 2.16 scrub starts
Feb 01 14:56:05 compute-0 ceph-mon[75179]: 2.16 scrub ok
Feb 01 14:56:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 01 14:56:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 01 14:56:06 compute-0 python3.9[115222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:56:06 compute-0 ceph-mon[75179]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 01 14:56:07 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 01 14:56:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 01 14:56:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 01 14:56:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb 01 14:56:07 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb 01 14:56:07 compute-0 sudo[115376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsofyqongaoxfxkcvogixzumlcwywrau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957766.944449-28-198107893610852/AnsiballZ_file.py'
Feb 01 14:56:07 compute-0 sudo[115376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:07 compute-0 python3.9[115378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:07 compute-0 sudo[115376]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:07 compute-0 ceph-mon[75179]: 10.6 scrub starts
Feb 01 14:56:07 compute-0 ceph-mon[75179]: 10.6 scrub ok
Feb 01 14:56:07 compute-0 ceph-mon[75179]: 10.1e scrub starts
Feb 01 14:56:07 compute-0 ceph-mon[75179]: 10.1e scrub ok
Feb 01 14:56:08 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 01 14:56:08 compute-0 sudo[115551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidcjoyfnauvkigtjmgusysviwvhrkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957767.7138226-36-121608938284332/AnsiballZ_stat.py'
Feb 01 14:56:08 compute-0 sudo[115551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:08 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 01 14:56:08 compute-0 python3.9[115553]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:08 compute-0 sudo[115551]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:08 compute-0 sudo[115629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efyjjekldfivptlcngemkgfqdeodaftk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957767.7138226-36-121608938284332/AnsiballZ_file.py'
Feb 01 14:56:08 compute-0 sudo[115629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:08 compute-0 python3.9[115631]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.hwlpz2yw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:08 compute-0 sudo[115629]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:08 compute-0 ceph-mon[75179]: 8.1b scrub starts
Feb 01 14:56:08 compute-0 ceph-mon[75179]: 8.1b scrub ok
Feb 01 14:56:08 compute-0 ceph-mon[75179]: 10.19 scrub starts
Feb 01 14:56:08 compute-0 ceph-mon[75179]: 10.19 scrub ok
Feb 01 14:56:08 compute-0 ceph-mon[75179]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:09 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb 01 14:56:09 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb 01 14:56:09 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 01 14:56:09 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 01 14:56:09 compute-0 sudo[115781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjoyysqfmmmtwbzdqlgwuzdwinqwstjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957769.1636124-56-118697928127948/AnsiballZ_stat.py'
Feb 01 14:56:09 compute-0 sudo[115781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:09 compute-0 python3.9[115783]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:09 compute-0 sudo[115781]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:09 compute-0 sudo[115859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-losxsiduhgmmdfibxfqsqfrmezvjaiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957769.1636124-56-118697928127948/AnsiballZ_file.py'
Feb 01 14:56:09 compute-0 sudo[115859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:09 compute-0 ceph-mon[75179]: 4.13 scrub starts
Feb 01 14:56:09 compute-0 ceph-mon[75179]: 4.13 scrub ok
Feb 01 14:56:09 compute-0 ceph-mon[75179]: 3.9 scrub starts
Feb 01 14:56:09 compute-0 ceph-mon[75179]: 3.9 scrub ok
Feb 01 14:56:10 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb 01 14:56:10 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb 01 14:56:10 compute-0 python3.9[115861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.qgg52tl8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:10 compute-0 sudo[115859]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:10 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 01 14:56:10 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 01 14:56:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:10 compute-0 sudo[116011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmmwlrbxecnxwqlqhpoymskjwdedksuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957770.288962-69-167079890541015/AnsiballZ_file.py'
Feb 01 14:56:10 compute-0 sudo[116011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:10 compute-0 python3.9[116013]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:56:10 compute-0 sudo[116011]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:10 compute-0 ceph-mon[75179]: 5.16 scrub starts
Feb 01 14:56:10 compute-0 ceph-mon[75179]: 5.16 scrub ok
Feb 01 14:56:10 compute-0 ceph-mon[75179]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:10 compute-0 ceph-mon[75179]: 8.1d scrub starts
Feb 01 14:56:10 compute-0 ceph-mon[75179]: 8.1d scrub ok
Feb 01 14:56:11 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb 01 14:56:11 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb 01 14:56:11 compute-0 sudo[116163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwgyspyytrghjzlzkuftqstpeatgeheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957770.8601582-77-6668262579430/AnsiballZ_stat.py'
Feb 01 14:56:11 compute-0 sudo[116163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:11 compute-0 python3.9[116165]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:11 compute-0 sudo[116163]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:11 compute-0 sudo[116241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqwuoovltccsjxosracgumrhvnpbvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957770.8601582-77-6668262579430/AnsiballZ_file.py'
Feb 01 14:56:11 compute-0 sudo[116241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:11 compute-0 python3.9[116243]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:56:11 compute-0 sudo[116241]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:11 compute-0 ceph-mon[75179]: 11.12 scrub starts
Feb 01 14:56:11 compute-0 ceph-mon[75179]: 11.12 scrub ok
Feb 01 14:56:11 compute-0 ceph-mon[75179]: 8.1f scrub starts
Feb 01 14:56:11 compute-0 ceph-mon[75179]: 8.1f scrub ok
Feb 01 14:56:12 compute-0 sudo[116393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfvrtgglrmmjljefmxoorjqglixjlpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957771.8594503-77-188779556072591/AnsiballZ_stat.py'
Feb 01 14:56:12 compute-0 sudo[116393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:12 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb 01 14:56:12 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb 01 14:56:12 compute-0 python3.9[116395]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:12 compute-0 sudo[116393]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:12 compute-0 sudo[116471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndlzgaymiubreebtbympknsdnfeoclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957771.8594503-77-188779556072591/AnsiballZ_file.py'
Feb 01 14:56:12 compute-0 sudo[116471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:12 compute-0 python3.9[116473]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:56:12 compute-0 sudo[116471]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:12 compute-0 ceph-mon[75179]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:12 compute-0 ceph-mon[75179]: 11.10 scrub starts
Feb 01 14:56:12 compute-0 ceph-mon[75179]: 11.10 scrub ok
Feb 01 14:56:13 compute-0 sudo[116623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdshgavaoxzalyfdmphghcmupqazfufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957772.8332841-100-163453659802327/AnsiballZ_file.py'
Feb 01 14:56:13 compute-0 sudo[116623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:13 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 01 14:56:13 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 01 14:56:13 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 01 14:56:13 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 01 14:56:13 compute-0 python3.9[116625]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:13 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb 01 14:56:13 compute-0 sudo[116623]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:13 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb 01 14:56:13 compute-0 sudo[116775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlzkfpyufbyhklxxsnivynybwxgowob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957773.446019-108-76829955503502/AnsiballZ_stat.py'
Feb 01 14:56:13 compute-0 sudo[116775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:13 compute-0 python3.9[116777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:13 compute-0 sudo[116775]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:13 compute-0 ceph-mon[75179]: 8.18 scrub starts
Feb 01 14:56:13 compute-0 ceph-mon[75179]: 8.18 scrub ok
Feb 01 14:56:13 compute-0 ceph-mon[75179]: 5.12 scrub starts
Feb 01 14:56:13 compute-0 ceph-mon[75179]: 5.12 scrub ok
Feb 01 14:56:14 compute-0 sudo[116853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obacwomtwapwcctxxjnzisgltxaqdjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957773.446019-108-76829955503502/AnsiballZ_file.py'
Feb 01 14:56:14 compute-0 sudo[116853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:14 compute-0 sudo[116856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:56:14 compute-0 sudo[116856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:14 compute-0 sudo[116856]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:14 compute-0 sudo[116881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:56:14 compute-0 sudo[116881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:14 compute-0 python3.9[116855]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:14 compute-0 sudo[116853]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:14 compute-0 sudo[117074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gniatztersjkcugkapissxfcbjqpvajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957774.6936817-120-155481524900474/AnsiballZ_stat.py'
Feb 01 14:56:14 compute-0 sudo[117074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:14 compute-0 sudo[116881]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:15 compute-0 ceph-mon[75179]: 11.1c scrub starts
Feb 01 14:56:15 compute-0 ceph-mon[75179]: 11.1c scrub ok
Feb 01 14:56:15 compute-0 ceph-mon[75179]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:56:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:56:15 compute-0 sudo[117089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:56:15 compute-0 sudo[117089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:15 compute-0 sudo[117089]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:15 compute-0 python3.9[117076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:15 compute-0 sudo[117114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:56:15 compute-0 sudo[117114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:15 compute-0 sudo[117074]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 01 14:56:15 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 01 14:56:15 compute-0 sudo[117214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcetmpeaikvrzluqifawjufreukpvqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957774.6936817-120-155481524900474/AnsiballZ_file.py'
Feb 01 14:56:15 compute-0 sudo[117214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.391163494 +0000 UTC m=+0.041937454 container create cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:56:15 compute-0 systemd[1]: Started libpod-conmon-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope.
Feb 01 14:56:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.442342382 +0000 UTC m=+0.093116352 container init cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.447733982 +0000 UTC m=+0.098507972 container start cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:56:15 compute-0 nice_ardinghelli[117247]: 167 167
Feb 01 14:56:15 compute-0 systemd[1]: libpod-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope: Deactivated successfully.
Feb 01 14:56:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.451124812 +0000 UTC m=+0.101898792 container attach cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.452014209 +0000 UTC m=+0.102788199 container died cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 14:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-775e359df296d55538c6904c6f8c574b881d2cb432dda41b5ff60a8b40982f6f-merged.mount: Deactivated successfully.
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.37754022 +0000 UTC m=+0.028314190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:15 compute-0 podman[117229]: 2026-02-01 14:56:15.491912692 +0000 UTC m=+0.142686652 container remove cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:56:15 compute-0 systemd[1]: libpod-conmon-cbd4023786ecea9e574735b0f1ce1508318f07e9facd39eb788b739cfe06025a.scope: Deactivated successfully.
Feb 01 14:56:15 compute-0 python3.9[117218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:15 compute-0 sudo[117214]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:15 compute-0 podman[117276]: 2026-02-01 14:56:15.601262695 +0000 UTC m=+0.038844963 container create 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:56:15 compute-0 systemd[1]: Started libpod-conmon-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope.
Feb 01 14:56:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:15 compute-0 podman[117276]: 2026-02-01 14:56:15.58659688 +0000 UTC m=+0.024179168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:15 compute-0 podman[117276]: 2026-02-01 14:56:15.701449036 +0000 UTC m=+0.139031344 container init 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 14:56:15 compute-0 podman[117276]: 2026-02-01 14:56:15.707104443 +0000 UTC m=+0.144686711 container start 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:56:15 compute-0 podman[117276]: 2026-02-01 14:56:15.714042379 +0000 UTC m=+0.151624647 container attach 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 14:56:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:56:16 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:56:16 compute-0 blissful_chaum[117313]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:56:16 compute-0 blissful_chaum[117313]: --> All data devices are unavailable
Feb 01 14:56:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb 01 14:56:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb 01 14:56:16 compute-0 systemd[1]: libpod-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope: Deactivated successfully.
Feb 01 14:56:16 compute-0 podman[117276]: 2026-02-01 14:56:16.126976894 +0000 UTC m=+0.564559192 container died 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 14:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6364105f03cc651f4aaf2a33ea80dc2a70ac9489de36279726fcb768406c7323-merged.mount: Deactivated successfully.
Feb 01 14:56:16 compute-0 podman[117276]: 2026-02-01 14:56:16.167230308 +0000 UTC m=+0.604812576 container remove 7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:56:16 compute-0 systemd[1]: libpod-conmon-7009e53648b4471cd833ec1bb8eac06ef2118dc09af3545c1d938b2d46e9580f.scope: Deactivated successfully.
Feb 01 14:56:16 compute-0 sudo[117472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietotkwnpvqkbpqrcpdfbpdghjjfxdlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957775.674173-132-65758542472949/AnsiballZ_systemd.py'
Feb 01 14:56:16 compute-0 sudo[117472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:16 compute-0 sudo[117114]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:16 compute-0 sudo[117475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:56:16 compute-0 sudo[117475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:16 compute-0 sudo[117475]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:16 compute-0 sudo[117500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:56:16 compute-0 sudo[117500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:16 compute-0 python3.9[117474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:56:16 compute-0 systemd[1]: Reloading.
Feb 01 14:56:16 compute-0 systemd-rc-local-generator[117559]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:56:16 compute-0 systemd-sysv-generator[117562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.599645191 +0000 UTC m=+0.040177922 container create 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.576182445 +0000 UTC m=+0.016715146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:16 compute-0 systemd[1]: Started libpod-conmon-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope.
Feb 01 14:56:16 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.738851069 +0000 UTC m=+0.179383840 container init 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.746069573 +0000 UTC m=+0.186602254 container start 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.749406062 +0000 UTC m=+0.189938833 container attach 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 14:56:16 compute-0 great_cray[117587]: 167 167
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.752232776 +0000 UTC m=+0.192765497 container died 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:56:16 compute-0 systemd[1]: libpod-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope: Deactivated successfully.
Feb 01 14:56:16 compute-0 sudo[117472]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-851627d0867326904f3e1771a30bfe83e47a32c577158af70a41ce7bef5b37ac-merged.mount: Deactivated successfully.
Feb 01 14:56:16 compute-0 podman[117572]: 2026-02-01 14:56:16.794496229 +0000 UTC m=+0.235028910 container remove 1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 14:56:16 compute-0 systemd[1]: libpod-conmon-1de682ba3f681a50e5d066e1ab7f4cdce6b8b6b9d23097dcd39e586a2fcd4af7.scope: Deactivated successfully.
Feb 01 14:56:16 compute-0 podman[117637]: 2026-02-01 14:56:16.981028321 +0000 UTC m=+0.054270291 container create dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 14:56:17 compute-0 systemd[1]: Started libpod-conmon-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope.
Feb 01 14:56:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:17.048372118 +0000 UTC m=+0.121614128 container init dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:56:17 compute-0 ceph-mon[75179]: 3.16 scrub starts
Feb 01 14:56:17 compute-0 ceph-mon[75179]: 3.16 scrub ok
Feb 01 14:56:17 compute-0 ceph-mon[75179]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:17 compute-0 ceph-mon[75179]: 3.15 scrub starts
Feb 01 14:56:17 compute-0 ceph-mon[75179]: 3.15 scrub ok
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:17.055128008 +0000 UTC m=+0.128369978 container start dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:16.965236092 +0000 UTC m=+0.038478092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:17.06227241 +0000 UTC m=+0.135514430 container attach dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 14:56:17 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb 01 14:56:17 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb 01 14:56:17 compute-0 sudo[117783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpuzfbivcutbytfieqebhfbadgfmfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957776.9512951-140-109088213399692/AnsiballZ_stat.py'
Feb 01 14:56:17 compute-0 sudo[117783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:17 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 01 14:56:17 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 01 14:56:17 compute-0 great_shaw[117705]: {
Feb 01 14:56:17 compute-0 great_shaw[117705]:     "0": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:         {
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "devices": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "/dev/loop3"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             ],
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_name": "ceph_lv0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_size": "21470642176",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "name": "ceph_lv0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "tags": {
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_name": "ceph",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.crush_device_class": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.encrypted": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.objectstore": "bluestore",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_id": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.vdo": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.with_tpm": "0"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             },
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "vg_name": "ceph_vg0"
Feb 01 14:56:17 compute-0 great_shaw[117705]:         }
Feb 01 14:56:17 compute-0 great_shaw[117705]:     ],
Feb 01 14:56:17 compute-0 great_shaw[117705]:     "1": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:         {
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "devices": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "/dev/loop4"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             ],
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_name": "ceph_lv1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_size": "21470642176",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "name": "ceph_lv1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "tags": {
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_name": "ceph",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.crush_device_class": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.encrypted": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.objectstore": "bluestore",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_id": "1",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.vdo": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.with_tpm": "0"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             },
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "vg_name": "ceph_vg1"
Feb 01 14:56:17 compute-0 great_shaw[117705]:         }
Feb 01 14:56:17 compute-0 great_shaw[117705]:     ],
Feb 01 14:56:17 compute-0 great_shaw[117705]:     "2": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:         {
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "devices": [
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "/dev/loop5"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             ],
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_name": "ceph_lv2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_size": "21470642176",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "name": "ceph_lv2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "tags": {
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.cluster_name": "ceph",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.crush_device_class": "",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.encrypted": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.objectstore": "bluestore",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osd_id": "2",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.vdo": "0",
Feb 01 14:56:17 compute-0 great_shaw[117705]:                 "ceph.with_tpm": "0"
Feb 01 14:56:17 compute-0 great_shaw[117705]:             },
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "type": "block",
Feb 01 14:56:17 compute-0 great_shaw[117705]:             "vg_name": "ceph_vg2"
Feb 01 14:56:17 compute-0 great_shaw[117705]:         }
Feb 01 14:56:17 compute-0 great_shaw[117705]:     ]
Feb 01 14:56:17 compute-0 great_shaw[117705]: }
Feb 01 14:56:17 compute-0 systemd[1]: libpod-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope: Deactivated successfully.
Feb 01 14:56:17 compute-0 conmon[117705]: conmon dec6256718a714252705 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope/container/memory.events
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:17.33609348 +0000 UTC m=+0.409335470 container died dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 01 14:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e95ca774324aa5b9d37e0af3e8f234c16cc5eeee6db44d324c78d6921987249b-merged.mount: Deactivated successfully.
Feb 01 14:56:17 compute-0 podman[117637]: 2026-02-01 14:56:17.378806617 +0000 UTC m=+0.452048607 container remove dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shaw, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 14:56:17 compute-0 python3.9[117785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:17 compute-0 systemd[1]: libpod-conmon-dec6256718a714252705ca2b1eafbbe4a62176679bdebd7ad3a8fdb3a0ef9b85.scope: Deactivated successfully.
Feb 01 14:56:17 compute-0 sudo[117500]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:17 compute-0 sudo[117783]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:17 compute-0 sudo[117804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:56:17 compute-0 sudo[117804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:17 compute-0 sudo[117804]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:17 compute-0 sudo[117841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:56:17 compute-0 sudo[117841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:17 compute-0 sudo[117927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzoayqqlkpxhwmbsjdtjhjczqdbepgmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957776.9512951-140-109088213399692/AnsiballZ_file.py'
Feb 01 14:56:17 compute-0 sudo[117927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:56:17
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'volumes']
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.77651002 +0000 UTC m=+0.053251600 container create d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 14:56:17 compute-0 python3.9[117929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:17 compute-0 sudo[117927]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:17 compute-0 systemd[1]: Started libpod-conmon-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope.
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.754315562 +0000 UTC m=+0.031057122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.86787766 +0000 UTC m=+0.144619230 container init d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.874413774 +0000 UTC m=+0.151155334 container start d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.877822345 +0000 UTC m=+0.154563895 container attach d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 14:56:17 compute-0 eloquent_hellman[117958]: 167 167
Feb 01 14:56:17 compute-0 systemd[1]: libpod-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope: Deactivated successfully.
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.8938539 +0000 UTC m=+0.170595450 container died d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86fa3950f33da487b70c506788c87cc878b433b48497aedccb7fda6bdcd3cf0-merged.mount: Deactivated successfully.
Feb 01 14:56:17 compute-0 podman[117940]: 2026-02-01 14:56:17.923011885 +0000 UTC m=+0.199753435 container remove d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 14:56:17 compute-0 systemd[1]: libpod-conmon-d8837107384a3b069c31d2ea4c9963e04b8f12f5711f43a23de1d22088fb1600.scope: Deactivated successfully.
Feb 01 14:56:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb 01 14:56:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb 01 14:56:18 compute-0 ceph-mon[75179]: 8.1a scrub starts
Feb 01 14:56:18 compute-0 ceph-mon[75179]: 8.1a scrub ok
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.056815712 +0000 UTC m=+0.054403485 container create 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:56:18 compute-0 systemd[1]: Started libpod-conmon-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope.
Feb 01 14:56:18 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.038354144 +0000 UTC m=+0.035941977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.138579606 +0000 UTC m=+0.136167449 container init 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.145363917 +0000 UTC m=+0.142951700 container start 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.148692256 +0000 UTC m=+0.146280109 container attach 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:56:18 compute-0 sudo[118152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkodwosysrchgpfvcoodgcysisleldoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957777.9461658-152-79824657587843/AnsiballZ_stat.py'
Feb 01 14:56:18 compute-0 sudo[118152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:18 compute-0 python3.9[118154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:18 compute-0 sudo[118152]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:56:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:56:18 compute-0 sudo[118273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psjfneywmspmcwhsjeuksuezotzamdxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957777.9461658-152-79824657587843/AnsiballZ_file.py'
Feb 01 14:56:18 compute-0 sudo[118273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:18 compute-0 python3.9[118280]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:18 compute-0 sudo[118273]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:18 compute-0 lvm[118307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:56:18 compute-0 lvm[118309]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:56:18 compute-0 lvm[118309]: VG ceph_vg1 finished
Feb 01 14:56:18 compute-0 lvm[118307]: VG ceph_vg0 finished
Feb 01 14:56:18 compute-0 lvm[118317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:56:18 compute-0 lvm[118317]: VG ceph_vg2 finished
Feb 01 14:56:18 compute-0 mystifying_feynman[118103]: {}
Feb 01 14:56:18 compute-0 systemd[1]: libpod-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Deactivated successfully.
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.929349576 +0000 UTC m=+0.926937349 container died 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:56:18 compute-0 systemd[1]: libpod-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Consumed 1.135s CPU time.
Feb 01 14:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b60266d076112be23c8eeae2adc6b97af407607191821759b11311a2794f461-merged.mount: Deactivated successfully.
Feb 01 14:56:18 compute-0 podman[118053]: 2026-02-01 14:56:18.969417654 +0000 UTC m=+0.967005437 container remove 9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 14:56:18 compute-0 systemd[1]: libpod-conmon-9ace389f7f4e28087ffc1cc95d71a1337c33546ea37e6a480ec18d4d9f7da6f6.scope: Deactivated successfully.
Feb 01 14:56:18 compute-0 sudo[117841]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:56:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:56:19 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb 01 14:56:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:19 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb 01 14:56:19 compute-0 ceph-mon[75179]: 11.1e scrub starts
Feb 01 14:56:19 compute-0 ceph-mon[75179]: 11.1e scrub ok
Feb 01 14:56:19 compute-0 ceph-mon[75179]: pgmap v291: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:19 compute-0 ceph-mon[75179]: 3.12 scrub starts
Feb 01 14:56:19 compute-0 ceph-mon[75179]: 3.12 scrub ok
Feb 01 14:56:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:56:19 compute-0 sudo[118428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:56:19 compute-0 sudo[118428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:56:19 compute-0 sudo[118428]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:19 compute-0 sudo[118499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgvjdjaxjbpwhiexumaicjzgqknxrbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957778.8929648-164-213822232584193/AnsiballZ_systemd.py'
Feb 01 14:56:19 compute-0 sudo[118499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:19 compute-0 python3.9[118501]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:56:19 compute-0 systemd[1]: Reloading.
Feb 01 14:56:19 compute-0 systemd-rc-local-generator[118524]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:56:19 compute-0 systemd-sysv-generator[118528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:56:19 compute-0 systemd[1]: Starting Create netns directory...
Feb 01 14:56:19 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 01 14:56:19 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 01 14:56:19 compute-0 systemd[1]: Finished Create netns directory.
Feb 01 14:56:19 compute-0 sudo[118499]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:20 compute-0 ceph-mon[75179]: 11.19 scrub starts
Feb 01 14:56:20 compute-0 ceph-mon[75179]: 11.19 scrub ok
Feb 01 14:56:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb 01 14:56:20 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb 01 14:56:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:20 compute-0 python3.9[118691]: ansible-ansible.builtin.service_facts Invoked
Feb 01 14:56:20 compute-0 network[118708]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 14:56:20 compute-0 network[118709]: 'network-scripts' will be removed from distribution in near future.
Feb 01 14:56:20 compute-0 network[118710]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 14:56:21 compute-0 ceph-mon[75179]: pgmap v292: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:21 compute-0 ceph-mon[75179]: 2.15 scrub starts
Feb 01 14:56:21 compute-0 ceph-mon[75179]: 2.15 scrub ok
Feb 01 14:56:21 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 01 14:56:21 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 01 14:56:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:22 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb 01 14:56:22 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb 01 14:56:22 compute-0 ceph-mon[75179]: 2.17 scrub starts
Feb 01 14:56:22 compute-0 ceph-mon[75179]: 2.17 scrub ok
Feb 01 14:56:23 compute-0 ceph-mon[75179]: pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:23 compute-0 ceph-mon[75179]: 8.e scrub starts
Feb 01 14:56:23 compute-0 ceph-mon[75179]: 8.e scrub ok
Feb 01 14:56:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb 01 14:56:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb 01 14:56:24 compute-0 sudo[118971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxdfjjpeasazmrqunhkikelxrczlijfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957784.394031-190-41652375140548/AnsiballZ_stat.py'
Feb 01 14:56:24 compute-0 sudo[118971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:24 compute-0 python3.9[118973]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:24 compute-0 sudo[118971]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:25 compute-0 sudo[119049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqazofgvbuhehyljypdleajmuiqzezvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957784.394031-190-41652375140548/AnsiballZ_file.py'
Feb 01 14:56:25 compute-0 sudo[119049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:25 compute-0 ceph-mon[75179]: pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:25 compute-0 ceph-mon[75179]: 4.8 scrub starts
Feb 01 14:56:25 compute-0 ceph-mon[75179]: 4.8 scrub ok
Feb 01 14:56:25 compute-0 python3.9[119051]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:25 compute-0 sudo[119049]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:25 compute-0 sudo[119201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqkhwmcqgrbkdaiuebtlxpvubtuvyeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957785.5075653-203-280868339034269/AnsiballZ_file.py'
Feb 01 14:56:25 compute-0 sudo[119201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:25 compute-0 python3.9[119203]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:26 compute-0 sudo[119201]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:26 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb 01 14:56:26 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb 01 14:56:26 compute-0 sudo[119353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryfhfkvwoqmfscsjcjfvuwinrxkcmdxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957786.1673706-211-23258529748352/AnsiballZ_stat.py'
Feb 01 14:56:26 compute-0 sudo[119353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:26 compute-0 python3.9[119355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:26 compute-0 sudo[119353]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:26 compute-0 sudo[119431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyyufcuzibcmnkrnmlfffvhwwavmslek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957786.1673706-211-23258529748352/AnsiballZ_file.py'
Feb 01 14:56:26 compute-0 sudo[119431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:26 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb 01 14:56:27 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb 01 14:56:27 compute-0 python3.9[119433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:27 compute-0 sudo[119431]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:27 compute-0 ceph-mon[75179]: pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:27 compute-0 ceph-mon[75179]: 10.e scrub starts
Feb 01 14:56:27 compute-0 ceph-mon[75179]: 10.e scrub ok
Feb 01 14:56:27 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb 01 14:56:27 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb 01 14:56:27 compute-0 sudo[119583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejqphqprsdiyyikyvjnerackwmsoqee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957787.3740425-226-221726782273626/AnsiballZ_timezone.py'
Feb 01 14:56:27 compute-0 sudo[119583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:56:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:56:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:56:28 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 01 14:56:28 compute-0 python3.9[119585]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 01 14:56:28 compute-0 systemd[1]: Starting Time & Date Service...
Feb 01 14:56:28 compute-0 ceph-mon[75179]: 8.f scrub starts
Feb 01 14:56:28 compute-0 ceph-mon[75179]: 8.f scrub ok
Feb 01 14:56:28 compute-0 ceph-mon[75179]: 4.12 scrub starts
Feb 01 14:56:28 compute-0 ceph-mon[75179]: 4.12 scrub ok
Feb 01 14:56:28 compute-0 systemd[1]: Started Time & Date Service.
Feb 01 14:56:28 compute-0 sudo[119583]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:28 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb 01 14:56:28 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb 01 14:56:28 compute-0 sudo[119739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvycrxbqvfpgpaxmlfjknkpyaxtestht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957788.4547474-235-40212269596235/AnsiballZ_file.py'
Feb 01 14:56:28 compute-0 sudo[119739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:28 compute-0 python3.9[119741]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:28 compute-0 sudo[119739]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:29 compute-0 ceph-mon[75179]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:29 compute-0 ceph-mon[75179]: 10.d scrub starts
Feb 01 14:56:29 compute-0 ceph-mon[75179]: 10.d scrub ok
Feb 01 14:56:29 compute-0 ceph-mon[75179]: 8.1c scrub starts
Feb 01 14:56:29 compute-0 ceph-mon[75179]: 8.1c scrub ok
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.163490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789163644, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7156, "num_deletes": 251, "total_data_size": 9709209, "memory_usage": 9892608, "flush_reason": "Manual Compaction"}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789206535, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7680689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7299, "table_properties": {"data_size": 7654229, "index_size": 17321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74026, "raw_average_key_size": 23, "raw_value_size": 7592411, "raw_average_value_size": 2371, "num_data_blocks": 762, "num_entries": 3202, "num_filter_entries": 3202, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957400, "oldest_key_time": 1769957400, "file_creation_time": 1769957789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 43113 microseconds, and 19074 cpu microseconds.
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.206608) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7680689 bytes OK
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.206655) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208377) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208404) EVENT_LOG_v1 {"time_micros": 1769957789208397, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.208454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9678222, prev total WAL file size 9678222, number of live WAL files 2.
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.211413) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7500KB) 13(58KB) 8(1944B)]
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789211583, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7742593, "oldest_snapshot_seqno": -1}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3028 keys, 7695593 bytes, temperature: kUnknown
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789252524, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7695593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7669481, "index_size": 17426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72466, "raw_average_key_size": 23, "raw_value_size": 7608931, "raw_average_value_size": 2512, "num_data_blocks": 768, "num_entries": 3028, "num_filter_entries": 3028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769957789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.252806) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7695593 bytes
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.254409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.7 rd, 187.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3317, records dropped: 289 output_compression: NoCompression
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.254442) EVENT_LOG_v1 {"time_micros": 1769957789254427, "job": 4, "event": "compaction_finished", "compaction_time_micros": 41037, "compaction_time_cpu_micros": 22285, "output_level": 6, "num_output_files": 1, "total_output_size": 7695593, "num_input_records": 3317, "num_output_records": 3028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255779, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255870, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957789255916, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 01 14:56:29 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:56:29.211250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:56:29 compute-0 sudo[119892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlsdtfpdhululfoyqsbcnosmjlmebpon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957789.0531921-243-280184792946036/AnsiballZ_stat.py'
Feb 01 14:56:29 compute-0 sudo[119892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:29 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb 01 14:56:29 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb 01 14:56:29 compute-0 python3.9[119894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:29 compute-0 rsyslogd[1001]: imjournal from <np0005604375:ceph-osd>: begin to drop messages due to rate-limiting
Feb 01 14:56:29 compute-0 sudo[119892]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:29 compute-0 sudo[119970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffoctjlvgqqmkrhtmonxxoruifanlsuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957789.0531921-243-280184792946036/AnsiballZ_file.py'
Feb 01 14:56:29 compute-0 sudo[119970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:29 compute-0 python3.9[119972]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:29 compute-0 sudo[119970]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:30 compute-0 ceph-mon[75179]: 11.1f scrub starts
Feb 01 14:56:30 compute-0 ceph-mon[75179]: 11.1f scrub ok
Feb 01 14:56:30 compute-0 ceph-mon[75179]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb 01 14:56:30 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb 01 14:56:30 compute-0 sudo[120122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzsmjndrpbvuvykcfykmzpzegeffpuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957790.1751497-255-125763001028077/AnsiballZ_stat.py'
Feb 01 14:56:30 compute-0 sudo[120122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:30 compute-0 python3.9[120124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:30 compute-0 sudo[120122]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:30 compute-0 sudo[120200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhgwprvbdzwtiryaidepwkrqczlgrmeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957790.1751497-255-125763001028077/AnsiballZ_file.py'
Feb 01 14:56:30 compute-0 sudo[120200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:30 compute-0 python3.9[120202]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8l5pfh3_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:31 compute-0 sudo[120200]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:31 compute-0 ceph-mon[75179]: 4.11 scrub starts
Feb 01 14:56:31 compute-0 ceph-mon[75179]: 4.11 scrub ok
Feb 01 14:56:31 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb 01 14:56:31 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb 01 14:56:31 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 01 14:56:31 compute-0 sudo[120352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voaahojzbpckzwqdajezacewtwfbfqny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957791.1515014-267-157305390689344/AnsiballZ_stat.py'
Feb 01 14:56:31 compute-0 sudo[120352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:31 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 01 14:56:31 compute-0 python3.9[120354]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:31 compute-0 sudo[120352]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:31 compute-0 sudo[120430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntoweedsmbszhgtxjvazvjuyoettbjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957791.1515014-267-157305390689344/AnsiballZ_file.py'
Feb 01 14:56:31 compute-0 sudo[120430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb 01 14:56:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb 01 14:56:31 compute-0 python3.9[120432]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:31 compute-0 sudo[120430]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:32 compute-0 ceph-mon[75179]: 11.11 scrub starts
Feb 01 14:56:32 compute-0 ceph-mon[75179]: 11.11 scrub ok
Feb 01 14:56:32 compute-0 ceph-mon[75179]: 5.11 scrub starts
Feb 01 14:56:32 compute-0 ceph-mon[75179]: 5.11 scrub ok
Feb 01 14:56:32 compute-0 ceph-mon[75179]: pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:32 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 01 14:56:32 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 01 14:56:32 compute-0 sudo[120582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkmjkpanpiabmisxranamqrjatwaqee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957792.1266253-280-216849031286118/AnsiballZ_command.py'
Feb 01 14:56:32 compute-0 sudo[120582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:32 compute-0 python3.9[120584]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:56:32 compute-0 sudo[120582]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:33 compute-0 sudo[120735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxkgpmujtoimgzmrxzeksfsjpszecbl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957792.819623-288-25005090402803/AnsiballZ_edpm_nftables_from_files.py'
Feb 01 14:56:33 compute-0 sudo[120735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:33 compute-0 ceph-mon[75179]: 10.15 scrub starts
Feb 01 14:56:33 compute-0 ceph-mon[75179]: 10.15 scrub ok
Feb 01 14:56:33 compute-0 ceph-mon[75179]: 3.11 scrub starts
Feb 01 14:56:33 compute-0 ceph-mon[75179]: 3.11 scrub ok
Feb 01 14:56:33 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 01 14:56:33 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 01 14:56:33 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 01 14:56:33 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 01 14:56:33 compute-0 python3[120737]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 01 14:56:33 compute-0 sudo[120735]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:33 compute-0 sudo[120887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfkqrpzoktpaeijpjpmiynaadxpqggys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957793.5780015-296-111470605461679/AnsiballZ_stat.py'
Feb 01 14:56:33 compute-0 sudo[120887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:34 compute-0 python3.9[120889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:34 compute-0 sudo[120887]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:34 compute-0 ceph-mon[75179]: 10.1a scrub starts
Feb 01 14:56:34 compute-0 ceph-mon[75179]: 8.12 scrub starts
Feb 01 14:56:34 compute-0 ceph-mon[75179]: 10.1a scrub ok
Feb 01 14:56:34 compute-0 ceph-mon[75179]: 8.12 scrub ok
Feb 01 14:56:34 compute-0 ceph-mon[75179]: pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:34 compute-0 sudo[120965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzhoillyomrvzolqpnohbirrsybbkzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957793.5780015-296-111470605461679/AnsiballZ_file.py'
Feb 01 14:56:34 compute-0 sudo[120965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:34 compute-0 python3.9[120967]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:34 compute-0 sudo[120965]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:35 compute-0 sudo[121117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvskmpupgqketcbaswmhhthhetaqyhhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957794.7457814-308-208461334453604/AnsiballZ_stat.py'
Feb 01 14:56:35 compute-0 sudo[121117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:35 compute-0 python3.9[121119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:35 compute-0 sudo[121117]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:35 compute-0 sudo[121242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkbvgohkkauyiwhgsepcuvvjqxduqdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957794.7457814-308-208461334453604/AnsiballZ_copy.py'
Feb 01 14:56:35 compute-0 sudo[121242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:35 compute-0 python3.9[121244]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957794.7457814-308-208461334453604/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:35 compute-0 sudo[121242]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:36 compute-0 sudo[121394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbrihqivrjgrdvalzqcqncyzcniyxauw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957796.1376257-323-245821698267489/AnsiballZ_stat.py'
Feb 01 14:56:36 compute-0 sudo[121394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:36 compute-0 python3.9[121396]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:36 compute-0 sudo[121394]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:36 compute-0 sudo[121472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzcuspiowlcgreuykykjbbqssszfedop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957796.1376257-323-245821698267489/AnsiballZ_file.py'
Feb 01 14:56:36 compute-0 sudo[121472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:36 compute-0 ceph-mon[75179]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:37 compute-0 python3.9[121474]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:37 compute-0 sudo[121472]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:37 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb 01 14:56:37 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb 01 14:56:37 compute-0 sudo[121624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vejwzshfjmbcpukowofuxmbsfmjvgawc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957797.2492805-335-6058101013807/AnsiballZ_stat.py'
Feb 01 14:56:37 compute-0 sudo[121624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:37 compute-0 python3.9[121626]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:37 compute-0 sudo[121624]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:37 compute-0 sudo[121702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwntggxvkasendenvkyommgxcouusmbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957797.2492805-335-6058101013807/AnsiballZ_file.py'
Feb 01 14:56:37 compute-0 sudo[121702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:38 compute-0 python3.9[121704]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:38 compute-0 sudo[121702]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:38 compute-0 sudo[121854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lziixedzhyxhtcjtqoysoqbwnexposlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957798.3125699-347-22568656438325/AnsiballZ_stat.py'
Feb 01 14:56:38 compute-0 sudo[121854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:38 compute-0 python3.9[121856]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:38 compute-0 sudo[121854]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:38 compute-0 ceph-mon[75179]: 5.13 scrub starts
Feb 01 14:56:38 compute-0 ceph-mon[75179]: 5.13 scrub ok
Feb 01 14:56:38 compute-0 ceph-mon[75179]: pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:39 compute-0 sudo[121932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cioquixtncibctbhxsewqxdhgxrfyhlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957798.3125699-347-22568656438325/AnsiballZ_file.py'
Feb 01 14:56:39 compute-0 sudo[121932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:39 compute-0 python3.9[121934]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb 01 14:56:39 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb 01 14:56:39 compute-0 sudo[121932]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:39 compute-0 sudo[122084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiepzujjheobprwscynqfojyfofrducu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957799.4289129-360-195292282308556/AnsiballZ_command.py'
Feb 01 14:56:39 compute-0 sudo[122084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:39 compute-0 python3.9[122086]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:56:39 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 01 14:56:39 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 01 14:56:39 compute-0 sudo[122084]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:40 compute-0 sudo[122239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbsbouynppopsuszrbhvumucknkxhuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957800.1725025-368-180248585037156/AnsiballZ_blockinfile.py'
Feb 01 14:56:40 compute-0 sudo[122239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:40 compute-0 python3.9[122241]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:40 compute-0 sudo[122239]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:40 compute-0 ceph-mon[75179]: 4.10 scrub starts
Feb 01 14:56:40 compute-0 ceph-mon[75179]: 4.10 scrub ok
Feb 01 14:56:40 compute-0 ceph-mon[75179]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:40 compute-0 ceph-mon[75179]: 10.9 scrub starts
Feb 01 14:56:40 compute-0 ceph-mon[75179]: 10.9 scrub ok
Feb 01 14:56:41 compute-0 sudo[122391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rllagxmmlgluezpossesiexekdyymkll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957800.977005-377-261197361611152/AnsiballZ_file.py'
Feb 01 14:56:41 compute-0 sudo[122391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb 01 14:56:41 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb 01 14:56:41 compute-0 python3.9[122393]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:41 compute-0 sudo[122391]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:42 compute-0 sudo[122543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sotxmnmazsoopgpcxvvifczenrpnvvou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957801.7416253-377-112591943976090/AnsiballZ_file.py'
Feb 01 14:56:42 compute-0 sudo[122543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:42 compute-0 python3.9[122545]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:42 compute-0 sudo[122543]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:42 compute-0 sudo[122695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdiiadelpdsjbrehlemmkidkneiqpck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957802.4076438-392-125248318751129/AnsiballZ_mount.py'
Feb 01 14:56:42 compute-0 sudo[122695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:42 compute-0 ceph-mon[75179]: 3.18 scrub starts
Feb 01 14:56:42 compute-0 ceph-mon[75179]: 3.18 scrub ok
Feb 01 14:56:42 compute-0 ceph-mon[75179]: pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:43 compute-0 python3.9[122697]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 01 14:56:43 compute-0 sudo[122695]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:43 compute-0 sudo[122847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsdwqdekbuxuylpnvtzvqyqqpdxtxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957803.1882353-392-116559249319543/AnsiballZ_mount.py'
Feb 01 14:56:43 compute-0 sudo[122847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:43 compute-0 python3.9[122849]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 01 14:56:43 compute-0 sudo[122847]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:43 compute-0 sshd-session[115072]: Connection closed by 192.168.122.30 port 35850
Feb 01 14:56:43 compute-0 sshd-session[115069]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:56:43 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Feb 01 14:56:43 compute-0 systemd[1]: session-39.scope: Consumed 26.766s CPU time.
Feb 01 14:56:43 compute-0 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Feb 01 14:56:43 compute-0 systemd-logind[786]: Removed session 39.
Feb 01 14:56:44 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb 01 14:56:44 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb 01 14:56:44 compute-0 ceph-mon[75179]: pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb 01 14:56:45 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb 01 14:56:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:45 compute-0 ceph-mon[75179]: 4.14 scrub starts
Feb 01 14:56:45 compute-0 ceph-mon[75179]: 4.14 scrub ok
Feb 01 14:56:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb 01 14:56:45 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb 01 14:56:46 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb 01 14:56:46 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb 01 14:56:46 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 01 14:56:46 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 01 14:56:46 compute-0 ceph-mon[75179]: 3.e scrub starts
Feb 01 14:56:46 compute-0 ceph-mon[75179]: 3.e scrub ok
Feb 01 14:56:46 compute-0 ceph-mon[75179]: pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:46 compute-0 ceph-mon[75179]: 8.6 scrub starts
Feb 01 14:56:46 compute-0 ceph-mon[75179]: 8.6 scrub ok
Feb 01 14:56:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 01 14:56:47 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 01 14:56:47 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 01 14:56:48 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb 01 14:56:48 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb 01 14:56:48 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb 01 14:56:48 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 01 14:56:48 compute-0 ceph-mon[75179]: 7.1c scrub starts
Feb 01 14:56:48 compute-0 ceph-mon[75179]: 7.1c scrub ok
Feb 01 14:56:48 compute-0 ceph-mon[75179]: 6.5 scrub starts
Feb 01 14:56:48 compute-0 ceph-mon[75179]: 6.5 scrub ok
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:56:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:56:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb 01 14:56:48 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb 01 14:56:49 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 01 14:56:49 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 01 14:56:49 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 01 14:56:49 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 10.12 scrub starts
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 10.12 scrub ok
Feb 01 14:56:49 compute-0 ceph-mon[75179]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 6.a scrub starts
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 10.14 scrub starts
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 10.14 scrub ok
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 7.11 scrub starts
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 7.11 scrub ok
Feb 01 14:56:49 compute-0 ceph-mon[75179]: 6.a scrub ok
Feb 01 14:56:49 compute-0 sshd-session[122874]: Accepted publickey for zuul from 192.168.122.30 port 47674 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:56:49 compute-0 systemd-logind[786]: New session 40 of user zuul.
Feb 01 14:56:49 compute-0 systemd[1]: Started Session 40 of User zuul.
Feb 01 14:56:49 compute-0 sshd-session[122874]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:56:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 01 14:56:49 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.9 scrub starts
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.9 scrub ok
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.2 scrub starts
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.2 scrub ok
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.8 scrub starts
Feb 01 14:56:50 compute-0 ceph-mon[75179]: 6.8 scrub ok
Feb 01 14:56:50 compute-0 ceph-mon[75179]: pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:50 compute-0 sudo[123027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvktpemannzmrjrzppadlgaiaoouehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957809.8299303-16-224447734985924/AnsiballZ_tempfile.py'
Feb 01 14:56:50 compute-0 sudo[123027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:50 compute-0 python3.9[123029]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 01 14:56:50 compute-0 sudo[123027]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:50 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 01 14:56:50 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 01 14:56:51 compute-0 sudo[123179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgurwzjfovtwkaivhnvnodwcnbxufqvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957810.6382236-28-45050197202394/AnsiballZ_stat.py'
Feb 01 14:56:51 compute-0 sudo[123179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:51 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb 01 14:56:51 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb 01 14:56:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb 01 14:56:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb 01 14:56:51 compute-0 ceph-mon[75179]: 6.7 scrub starts
Feb 01 14:56:51 compute-0 ceph-mon[75179]: 6.7 scrub ok
Feb 01 14:56:51 compute-0 python3.9[123181]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:56:51 compute-0 sudo[123179]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:51 compute-0 sudo[123333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxrjhlltcqjryesogehpbrysiwahppjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957811.52293-36-276254597744156/AnsiballZ_slurp.py'
Feb 01 14:56:51 compute-0 sudo[123333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:52 compute-0 python3.9[123335]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 01 14:56:52 compute-0 sudo[123333]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.3 scrub starts
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.3 scrub ok
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.6 scrub starts
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.6 scrub ok
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.f scrub starts
Feb 01 14:56:52 compute-0 ceph-mon[75179]: 6.f scrub ok
Feb 01 14:56:52 compute-0 ceph-mon[75179]: pgmap v308: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:52 compute-0 sudo[123485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkefqsxuadnvshcstmxvsrpscbydvdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957812.3171935-44-280578094158899/AnsiballZ_stat.py'
Feb 01 14:56:52 compute-0 sudo[123485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:52 compute-0 python3.9[123487]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.sx404mqt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:56:52 compute-0 sudo[123485]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:52 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb 01 14:56:52 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb 01 14:56:53 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 01 14:56:53 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 01 14:56:53 compute-0 sudo[123610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgdjdrgzxlotxbmpjngivpuawkaalqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957812.3171935-44-280578094158899/AnsiballZ_copy.py'
Feb 01 14:56:53 compute-0 sudo[123610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:53 compute-0 ceph-mon[75179]: 6.0 scrub starts
Feb 01 14:56:53 compute-0 ceph-mon[75179]: 6.0 scrub ok
Feb 01 14:56:53 compute-0 python3.9[123612]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.sx404mqt mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957812.3171935-44-280578094158899/.source.sx404mqt _original_basename=.qoepjed1 follow=False checksum=5dd92f65a34c73cdb75f1a4430851ce8bc57dfcd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:53 compute-0 sudo[123610]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:54 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb 01 14:56:54 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb 01 14:56:54 compute-0 sudo[123762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvprrpmwucshtkjxbcykfigsbnfbjyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957813.7198625-59-168231653767020/AnsiballZ_setup.py'
Feb 01 14:56:54 compute-0 sudo[123762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:54 compute-0 ceph-mon[75179]: 9.e scrub starts
Feb 01 14:56:54 compute-0 ceph-mon[75179]: 9.e scrub ok
Feb 01 14:56:54 compute-0 ceph-mon[75179]: pgmap v309: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:54 compute-0 python3.9[123764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:56:54 compute-0 sudo[123762]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 01 14:56:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 01 14:56:55 compute-0 sudo[123914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbpdnkekplhmkqkxkqhkwjazspbnrpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957814.902259-68-270028345643996/AnsiballZ_blockinfile.py'
Feb 01 14:56:55 compute-0 sudo[123914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:56:55 compute-0 python3.9[123916]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc91AYQnCiB0gaeezmTYoTbrfn13wkohxC7DIARmFIxyirGt426V9bgiFFpczr0aG/jVGnrXyqspzqVB5qhL9auJ/zaBQu1HuEMj/iSqvtp/5CDZvoCsolbRvc44zq2YNqAjmlgPQKe2f5MpaLGuLQIttz10Aj01eq50uvoj+Hccu0tBH2HrkQ6PphB9SaLI0ycAPr4B4WyPj9bCzJA9VYlxP6l4qkBqQjSDZLHnNDZP7N8pB38yfZB4EeE9v/ooH5aVJpDjV0Ciwtv4zQTv2W/HjYxaR9DsoVdVzUJKnzBZXW+kb2vE/A6rxP/+raWm+Z4jwydT2ZGCcAPe024SW6OUhi434WMJg15As435pj6vNzkfhYX2vPuIZed9Rue7qlD9kPRcg71YkvhFlja7MORqf5+fQtCfHTz9OakK3VATcSgFt4cP8UrBn+vqksDnD16t+njeWjWiJ84mM9yrOXBZblouKVTgDAkKsj+6dVItGIfTdsgn1Xo3eDknUU3Qk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5PgjrlIGkEPCJJDOYu9tmd12o/4td87MoNHh6uIuRZ
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNnAPVuUouOEBJ57nPy2aB3GgfV4SpHa2H6A23QhOI4mJOPaen6XNPSxMMgeo9r5YMVaTTaE35iZ3Xh9PT0kwJ4=
                                              create=True mode=0644 path=/tmp/ansible.sx404mqt state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:55 compute-0 ceph-mon[75179]: 9.8 scrub starts
Feb 01 14:56:55 compute-0 ceph-mon[75179]: 9.8 scrub ok
Feb 01 14:56:55 compute-0 sudo[123914]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:56 compute-0 sudo[124066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udamispwhmvatqgrfhdfyllurcaekukf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957815.714377-76-237148958939966/AnsiballZ_command.py'
Feb 01 14:56:56 compute-0 sudo[124066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:56 compute-0 python3.9[124068]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.sx404mqt' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:56:56 compute-0 sudo[124066]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:56 compute-0 ceph-mon[75179]: 6.d scrub starts
Feb 01 14:56:56 compute-0 ceph-mon[75179]: 6.d scrub ok
Feb 01 14:56:56 compute-0 ceph-mon[75179]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:56 compute-0 sudo[124220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowswvoizhjempxnvlwrtttvasihvlkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957816.355839-84-226015681987241/AnsiballZ_file.py'
Feb 01 14:56:56 compute-0 sudo[124220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:56:56 compute-0 python3.9[124222]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.sx404mqt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:56:56 compute-0 sudo[124220]: pam_unix(sudo:session): session closed for user root
Feb 01 14:56:57 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb 01 14:56:57 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb 01 14:56:57 compute-0 sshd-session[122877]: Connection closed by 192.168.122.30 port 47674
Feb 01 14:56:57 compute-0 sshd-session[122874]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:56:57 compute-0 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Feb 01 14:56:57 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Feb 01 14:56:57 compute-0 systemd[1]: session-40.scope: Consumed 4.715s CPU time.
Feb 01 14:56:57 compute-0 systemd-logind[786]: Removed session 40.
Feb 01 14:56:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb 01 14:56:58 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb 01 14:56:58 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 01 14:56:58 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 01 14:56:58 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 01 14:56:58 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb 01 14:56:58 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb 01 14:56:58 compute-0 ceph-mon[75179]: 6.4 scrub starts
Feb 01 14:56:58 compute-0 ceph-mon[75179]: 6.4 scrub ok
Feb 01 14:56:58 compute-0 ceph-mon[75179]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:58 compute-0 ceph-mon[75179]: 9.11 scrub starts
Feb 01 14:56:58 compute-0 ceph-mon[75179]: 9.11 scrub ok
Feb 01 14:56:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:56:59 compute-0 ceph-mon[75179]: 6.e scrub starts
Feb 01 14:56:59 compute-0 ceph-mon[75179]: 6.e scrub ok
Feb 01 14:56:59 compute-0 ceph-mon[75179]: 9.17 scrub starts
Feb 01 14:56:59 compute-0 ceph-mon[75179]: 9.17 scrub ok
Feb 01 14:57:00 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb 01 14:57:00 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb 01 14:57:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb 01 14:57:00 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb 01 14:57:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:00 compute-0 ceph-mon[75179]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:00 compute-0 ceph-mon[75179]: 9.5 scrub starts
Feb 01 14:57:00 compute-0 ceph-mon[75179]: 9.5 scrub ok
Feb 01 14:57:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb 01 14:57:01 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb 01 14:57:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:01 compute-0 ceph-mon[75179]: 6.1 scrub starts
Feb 01 14:57:01 compute-0 ceph-mon[75179]: 6.1 scrub ok
Feb 01 14:57:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb 01 14:57:02 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb 01 14:57:02 compute-0 sshd-session[124250]: Accepted publickey for zuul from 192.168.122.30 port 38378 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:57:02 compute-0 systemd-logind[786]: New session 41 of user zuul.
Feb 01 14:57:02 compute-0 systemd[1]: Started Session 41 of User zuul.
Feb 01 14:57:02 compute-0 sshd-session[124250]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:57:02 compute-0 ceph-mon[75179]: 6.c scrub starts
Feb 01 14:57:02 compute-0 ceph-mon[75179]: 6.c scrub ok
Feb 01 14:57:02 compute-0 ceph-mon[75179]: pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:03 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb 01 14:57:03 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb 01 14:57:03 compute-0 python3.9[124403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:57:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:04 compute-0 ceph-mon[75179]: 6.b scrub starts
Feb 01 14:57:04 compute-0 ceph-mon[75179]: 6.b scrub ok
Feb 01 14:57:04 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb 01 14:57:04 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb 01 14:57:04 compute-0 sudo[124557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlyrfyhksxailulztjanplnnrbjxvjlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957824.0661204-27-108635087629276/AnsiballZ_systemd.py'
Feb 01 14:57:04 compute-0 sudo[124557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:04 compute-0 python3.9[124559]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 01 14:57:04 compute-0 sudo[124557]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:05 compute-0 ceph-mon[75179]: 9.15 scrub starts
Feb 01 14:57:05 compute-0 ceph-mon[75179]: 9.15 scrub ok
Feb 01 14:57:05 compute-0 ceph-mon[75179]: pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:05 compute-0 ceph-mon[75179]: 9.16 scrub starts
Feb 01 14:57:05 compute-0 ceph-mon[75179]: 9.16 scrub ok
Feb 01 14:57:05 compute-0 sudo[124711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrlmsuafzdngmpgiimrkrwjtbdcudhvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957825.056267-35-24128757347155/AnsiballZ_systemd.py'
Feb 01 14:57:05 compute-0 sudo[124711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:05 compute-0 python3.9[124713]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 14:57:05 compute-0 sudo[124711]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:06 compute-0 sudo[124864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgasyclmcabzrgivnsonprnmmnhsfvbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957825.7731469-44-255909297522800/AnsiballZ_command.py'
Feb 01 14:57:06 compute-0 sudo[124864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:06 compute-0 python3.9[124866]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:57:06 compute-0 sudo[124864]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:06 compute-0 ceph-mon[75179]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:06 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb 01 14:57:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb 01 14:57:06 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb 01 14:57:06 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb 01 14:57:06 compute-0 sudo[125017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiyixjsnzkzazvydpkxyrifahtotlxkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957826.464928-52-60717967239127/AnsiballZ_stat.py'
Feb 01 14:57:06 compute-0 sudo[125017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:07 compute-0 python3.9[125019]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:57:07 compute-0 sudo[125017]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:07 compute-0 ceph-mon[75179]: 9.f scrub starts
Feb 01 14:57:07 compute-0 ceph-mon[75179]: 9.14 scrub starts
Feb 01 14:57:07 compute-0 ceph-mon[75179]: 9.f scrub ok
Feb 01 14:57:07 compute-0 ceph-mon[75179]: 9.14 scrub ok
Feb 01 14:57:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb 01 14:57:07 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb 01 14:57:07 compute-0 sudo[125169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kydzeqgqfjnoyhikfrjfkfwodbcnlsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957827.2888014-61-251278416923670/AnsiballZ_file.py'
Feb 01 14:57:07 compute-0 sudo[125169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:07 compute-0 python3.9[125171]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:07 compute-0 sudo[125169]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:08 compute-0 sshd-session[124253]: Connection closed by 192.168.122.30 port 38378
Feb 01 14:57:08 compute-0 sshd-session[124250]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:57:08 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Feb 01 14:57:08 compute-0 systemd[1]: session-41.scope: Consumed 3.016s CPU time.
Feb 01 14:57:08 compute-0 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Feb 01 14:57:08 compute-0 systemd-logind[786]: Removed session 41.
Feb 01 14:57:08 compute-0 ceph-mon[75179]: 9.c scrub starts
Feb 01 14:57:08 compute-0 ceph-mon[75179]: 9.c scrub ok
Feb 01 14:57:08 compute-0 ceph-mon[75179]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb 01 14:57:09 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb 01 14:57:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb 01 14:57:10 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb 01 14:57:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:10 compute-0 ceph-mon[75179]: 9.7 scrub starts
Feb 01 14:57:10 compute-0 ceph-mon[75179]: 9.7 scrub ok
Feb 01 14:57:10 compute-0 ceph-mon[75179]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:11 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 01 14:57:11 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 01 14:57:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:11 compute-0 ceph-mon[75179]: 9.10 scrub starts
Feb 01 14:57:11 compute-0 ceph-mon[75179]: 9.10 scrub ok
Feb 01 14:57:12 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb 01 14:57:12 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb 01 14:57:12 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb 01 14:57:12 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb 01 14:57:12 compute-0 sshd-session[125196]: Accepted publickey for zuul from 192.168.122.30 port 38274 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:57:12 compute-0 systemd-logind[786]: New session 42 of user zuul.
Feb 01 14:57:12 compute-0 systemd[1]: Started Session 42 of User zuul.
Feb 01 14:57:12 compute-0 sshd-session[125196]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:57:12 compute-0 ceph-mon[75179]: 9.6 scrub starts
Feb 01 14:57:12 compute-0 ceph-mon[75179]: 9.6 scrub ok
Feb 01 14:57:12 compute-0 ceph-mon[75179]: pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:13 compute-0 python3.9[125349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:57:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:13 compute-0 ceph-mon[75179]: 9.12 scrub starts
Feb 01 14:57:13 compute-0 ceph-mon[75179]: 9.12 scrub ok
Feb 01 14:57:13 compute-0 ceph-mon[75179]: 9.19 scrub starts
Feb 01 14:57:13 compute-0 ceph-mon[75179]: 9.19 scrub ok
Feb 01 14:57:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb 01 14:57:14 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb 01 14:57:14 compute-0 sudo[125503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-binkygzapkoxzbwqtpxdururfmtlwmpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957834.1872642-29-195491856594709/AnsiballZ_setup.py'
Feb 01 14:57:14 compute-0 sudo[125503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:14 compute-0 python3.9[125505]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:57:14 compute-0 ceph-mon[75179]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:15 compute-0 sudo[125503]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:15 compute-0 sudo[125587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olszintbkcpiakhioxbljzwhjplwxsyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957834.1872642-29-195491856594709/AnsiballZ_dnf.py'
Feb 01 14:57:15 compute-0 sudo[125587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:15 compute-0 python3.9[125589]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 01 14:57:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:15 compute-0 ceph-mon[75179]: 9.2 scrub starts
Feb 01 14:57:15 compute-0 ceph-mon[75179]: 9.2 scrub ok
Feb 01 14:57:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb 01 14:57:16 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb 01 14:57:16 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb 01 14:57:16 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb 01 14:57:16 compute-0 sudo[125587]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:16 compute-0 ceph-mon[75179]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:16 compute-0 ceph-mon[75179]: 9.b scrub starts
Feb 01 14:57:16 compute-0 ceph-mon[75179]: 9.b scrub ok
Feb 01 14:57:17 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb 01 14:57:17 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb 01 14:57:17 compute-0 python3.9[125740]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:57:17
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'vms']
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:57:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:17 compute-0 ceph-mon[75179]: 9.0 scrub starts
Feb 01 14:57:17 compute-0 ceph-mon[75179]: 9.0 scrub ok
Feb 01 14:57:17 compute-0 ceph-mon[75179]: 9.9 scrub starts
Feb 01 14:57:17 compute-0 ceph-mon[75179]: 9.9 scrub ok
Feb 01 14:57:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb 01 14:57:18 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:57:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:57:18 compute-0 python3.9[125891]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 14:57:18 compute-0 ceph-mon[75179]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:18 compute-0 ceph-mon[75179]: 9.d scrub starts
Feb 01 14:57:18 compute-0 ceph-mon[75179]: 9.d scrub ok
Feb 01 14:57:19 compute-0 sudo[125968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:57:19 compute-0 sudo[125968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:19 compute-0 sudo[125968]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:19 compute-0 sudo[125999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:57:19 compute-0 sudo[125999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:19 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 01 14:57:19 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 01 14:57:19 compute-0 python3.9[126093]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:57:19 compute-0 sudo[125999]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:57:19 compute-0 sudo[126147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:57:19 compute-0 sudo[126147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:19 compute-0 sudo[126147]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:19 compute-0 sudo[126195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:57:19 compute-0 sudo[126195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:19 compute-0 podman[126335]: 2026-02-01 14:57:19.974410962 +0000 UTC m=+0.041386588 container create 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:57:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:57:20 compute-0 systemd[1]: Started libpod-conmon-92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac.scope.
Feb 01 14:57:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:20.044441845 +0000 UTC m=+0.111417491 container init 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 01 14:57:20 compute-0 python3.9[126322]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:20.051869215 +0000 UTC m=+0.118844851 container start 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:19.959328307 +0000 UTC m=+0.026303963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:20.055852357 +0000 UTC m=+0.122828003 container attach 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:57:20 compute-0 systemd[1]: libpod-92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac.scope: Deactivated successfully.
Feb 01 14:57:20 compute-0 sleepy_beaver[126351]: 167 167
Feb 01 14:57:20 compute-0 conmon[126351]: conmon 92394cceb682486b85e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac.scope/container/memory.events
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:20.059357926 +0000 UTC m=+0.126333592 container died 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fbe917314c7cdc4e6cdee07a50baf68a4ee2690f4821032817aeaa110a67e1c-merged.mount: Deactivated successfully.
Feb 01 14:57:20 compute-0 podman[126335]: 2026-02-01 14:57:20.100272529 +0000 UTC m=+0.167248155 container remove 92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:57:20 compute-0 systemd[1]: libpod-conmon-92394cceb682486b85e91a80a747b58f54ed243582edc530da43f277b7457fac.scope: Deactivated successfully.
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.215385723 +0000 UTC m=+0.045517394 container create 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:57:20 compute-0 systemd[1]: Started libpod-conmon-5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c.scope.
Feb 01 14:57:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.188971328 +0000 UTC m=+0.019103059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.299714749 +0000 UTC m=+0.129846460 container init 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.306325456 +0000 UTC m=+0.136457107 container start 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.310055781 +0000 UTC m=+0.140187422 container attach 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 14:57:20 compute-0 sshd-session[125199]: Connection closed by 192.168.122.30 port 38274
Feb 01 14:57:20 compute-0 sshd-session[125196]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:57:20 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Feb 01 14:57:20 compute-0 systemd[1]: session-42.scope: Consumed 5.025s CPU time.
Feb 01 14:57:20 compute-0 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Feb 01 14:57:20 compute-0 systemd-logind[786]: Removed session 42.
Feb 01 14:57:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:20 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb 01 14:57:20 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb 01 14:57:20 compute-0 tender_hermann[126416]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:57:20 compute-0 tender_hermann[126416]: --> All data devices are unavailable
Feb 01 14:57:20 compute-0 systemd[1]: libpod-5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c.scope: Deactivated successfully.
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.743815925 +0000 UTC m=+0.573947596 container died 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 14:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0fb3d04e9b6cb28ee27a0fff5885a875eb576a7e3a91b84f529f095297407e-merged.mount: Deactivated successfully.
Feb 01 14:57:20 compute-0 podman[126400]: 2026-02-01 14:57:20.795535123 +0000 UTC m=+0.625666794 container remove 5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:20 compute-0 systemd[1]: libpod-conmon-5d7cf7803324b2d287fc903164c27dc4487238237a4f58b7e7ccd94e2835414c.scope: Deactivated successfully.
Feb 01 14:57:20 compute-0 sudo[126195]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:20 compute-0 sudo[126450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:57:20 compute-0 sudo[126450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:20 compute-0 sudo[126450]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:20 compute-0 sudo[126475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:57:20 compute-0 sudo[126475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:20 compute-0 ceph-mon[75179]: 9.a scrub starts
Feb 01 14:57:20 compute-0 ceph-mon[75179]: 9.a scrub ok
Feb 01 14:57:20 compute-0 ceph-mon[75179]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:21 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb 01 14:57:21 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.3053453 +0000 UTC m=+0.052683376 container create a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:21 compute-0 systemd[1]: Started libpod-conmon-a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5.scope.
Feb 01 14:57:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.370711242 +0000 UTC m=+0.118049348 container init a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.37524274 +0000 UTC m=+0.122580816 container start a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.284220715 +0000 UTC m=+0.031558801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:21 compute-0 laughing_wilson[126528]: 167 167
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.378563073 +0000 UTC m=+0.125901149 container attach a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:57:21 compute-0 systemd[1]: libpod-a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5.scope: Deactivated successfully.
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.379401357 +0000 UTC m=+0.126739463 container died a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dccdfd97c333992d087cc4e6511c8dafed9a7e7e5129b54a5b299f75d4d0ca05-merged.mount: Deactivated successfully.
Feb 01 14:57:21 compute-0 podman[126512]: 2026-02-01 14:57:21.418366925 +0000 UTC m=+0.165705001 container remove a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wilson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:57:21 compute-0 systemd[1]: libpod-conmon-a7393034552a5edd8b30efee2197f104fdf2892d372ece78615355bb44d93ad5.scope: Deactivated successfully.
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.542677888 +0000 UTC m=+0.041976714 container create d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 01 14:57:21 compute-0 systemd[1]: Started libpod-conmon-d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de.scope.
Feb 01 14:57:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1797b2e8ca0dc9e4c992e6991ec51345b2af1c62e5fa068b7c0d012b1a0647fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1797b2e8ca0dc9e4c992e6991ec51345b2af1c62e5fa068b7c0d012b1a0647fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1797b2e8ca0dc9e4c992e6991ec51345b2af1c62e5fa068b7c0d012b1a0647fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1797b2e8ca0dc9e4c992e6991ec51345b2af1c62e5fa068b7c0d012b1a0647fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.616052946 +0000 UTC m=+0.115351812 container init d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.524594409 +0000 UTC m=+0.023893285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.622789266 +0000 UTC m=+0.122088082 container start d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.626025707 +0000 UTC m=+0.125324573 container attach d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Feb 01 14:57:21 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb 01 14:57:21 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb 01 14:57:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:21 compute-0 kind_germain[126570]: {
Feb 01 14:57:21 compute-0 kind_germain[126570]:     "0": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:         {
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "devices": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "/dev/loop3"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             ],
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_name": "ceph_lv0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_size": "21470642176",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "name": "ceph_lv0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "tags": {
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_name": "ceph",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.crush_device_class": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.encrypted": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.objectstore": "bluestore",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_id": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.vdo": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.with_tpm": "0"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             },
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "vg_name": "ceph_vg0"
Feb 01 14:57:21 compute-0 kind_germain[126570]:         }
Feb 01 14:57:21 compute-0 kind_germain[126570]:     ],
Feb 01 14:57:21 compute-0 kind_germain[126570]:     "1": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:         {
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "devices": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "/dev/loop4"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             ],
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_name": "ceph_lv1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_size": "21470642176",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "name": "ceph_lv1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "tags": {
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_name": "ceph",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.crush_device_class": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.encrypted": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.objectstore": "bluestore",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_id": "1",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.vdo": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.with_tpm": "0"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             },
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "vg_name": "ceph_vg1"
Feb 01 14:57:21 compute-0 kind_germain[126570]:         }
Feb 01 14:57:21 compute-0 kind_germain[126570]:     ],
Feb 01 14:57:21 compute-0 kind_germain[126570]:     "2": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:         {
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "devices": [
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "/dev/loop5"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             ],
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_name": "ceph_lv2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_size": "21470642176",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "name": "ceph_lv2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "tags": {
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.cluster_name": "ceph",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.crush_device_class": "",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.encrypted": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.objectstore": "bluestore",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osd_id": "2",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.vdo": "0",
Feb 01 14:57:21 compute-0 kind_germain[126570]:                 "ceph.with_tpm": "0"
Feb 01 14:57:21 compute-0 kind_germain[126570]:             },
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "type": "block",
Feb 01 14:57:21 compute-0 kind_germain[126570]:             "vg_name": "ceph_vg2"
Feb 01 14:57:21 compute-0 kind_germain[126570]:         }
Feb 01 14:57:21 compute-0 kind_germain[126570]:     ]
Feb 01 14:57:21 compute-0 kind_germain[126570]: }
Feb 01 14:57:21 compute-0 systemd[1]: libpod-d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de.scope: Deactivated successfully.
Feb 01 14:57:21 compute-0 conmon[126570]: conmon d34071ab9ba741ef7dc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de.scope/container/memory.events
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.901236523 +0000 UTC m=+0.400535389 container died d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 14:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1797b2e8ca0dc9e4c992e6991ec51345b2af1c62e5fa068b7c0d012b1a0647fb-merged.mount: Deactivated successfully.
Feb 01 14:57:21 compute-0 podman[126554]: 2026-02-01 14:57:21.950895323 +0000 UTC m=+0.450194179 container remove d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:57:21 compute-0 systemd[1]: libpod-conmon-d34071ab9ba741ef7dc01c82470751df5638cfdca53e78eaa552b3b38666e5de.scope: Deactivated successfully.
Feb 01 14:57:22 compute-0 ceph-mon[75179]: 9.18 scrub starts
Feb 01 14:57:22 compute-0 ceph-mon[75179]: 9.18 scrub ok
Feb 01 14:57:22 compute-0 ceph-mon[75179]: 9.4 scrub starts
Feb 01 14:57:22 compute-0 ceph-mon[75179]: 9.4 scrub ok
Feb 01 14:57:22 compute-0 sudo[126475]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:22 compute-0 sudo[126591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:57:22 compute-0 sudo[126591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:22 compute-0 sudo[126591]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:22 compute-0 sudo[126616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:57:22 compute-0 sudo[126616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.43519665 +0000 UTC m=+0.044511405 container create cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:57:22 compute-0 systemd[1]: Started libpod-conmon-cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc.scope.
Feb 01 14:57:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.410322829 +0000 UTC m=+0.019637624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.521516023 +0000 UTC m=+0.130830768 container init cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.528033946 +0000 UTC m=+0.137348711 container start cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.531826673 +0000 UTC m=+0.141141438 container attach cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 14:57:22 compute-0 cool_pasteur[126669]: 167 167
Feb 01 14:57:22 compute-0 systemd[1]: libpod-cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc.scope: Deactivated successfully.
Feb 01 14:57:22 compute-0 conmon[126669]: conmon cfc1aeb80cb821bfe240 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc.scope/container/memory.events
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.535530948 +0000 UTC m=+0.144845713 container died cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 14:57:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad58f0497636bbfb6653acb235a9e1dbab6399e54cf0898a2b45647969c737a-merged.mount: Deactivated successfully.
Feb 01 14:57:22 compute-0 podman[126653]: 2026-02-01 14:57:22.578445807 +0000 UTC m=+0.187760562 container remove cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_pasteur, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:22 compute-0 systemd[1]: libpod-conmon-cfc1aeb80cb821bfe2403c75d48427a8aea6fbdf2714e905bbb8a05607d51bdc.scope: Deactivated successfully.
Feb 01 14:57:22 compute-0 podman[126693]: 2026-02-01 14:57:22.721656133 +0000 UTC m=+0.035844721 container create 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:57:22 compute-0 systemd[1]: Started libpod-conmon-881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0.scope.
Feb 01 14:57:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151b1b894e6d9c45c340940a52e3c6e315ab7025f71de307035de4416f15afac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151b1b894e6d9c45c340940a52e3c6e315ab7025f71de307035de4416f15afac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151b1b894e6d9c45c340940a52e3c6e315ab7025f71de307035de4416f15afac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151b1b894e6d9c45c340940a52e3c6e315ab7025f71de307035de4416f15afac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:57:22 compute-0 podman[126693]: 2026-02-01 14:57:22.798177 +0000 UTC m=+0.112365688 container init 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 14:57:22 compute-0 podman[126693]: 2026-02-01 14:57:22.706680061 +0000 UTC m=+0.020868659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:57:22 compute-0 podman[126693]: 2026-02-01 14:57:22.803612483 +0000 UTC m=+0.117801091 container start 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:57:22 compute-0 podman[126693]: 2026-02-01 14:57:22.807727239 +0000 UTC m=+0.121915867 container attach 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:57:23 compute-0 ceph-mon[75179]: 9.13 scrub starts
Feb 01 14:57:23 compute-0 ceph-mon[75179]: 9.13 scrub ok
Feb 01 14:57:23 compute-0 ceph-mon[75179]: pgmap v323: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 01 14:57:23 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 01 14:57:23 compute-0 lvm[126789]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:57:23 compute-0 lvm[126788]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:57:23 compute-0 lvm[126788]: VG ceph_vg1 finished
Feb 01 14:57:23 compute-0 lvm[126789]: VG ceph_vg0 finished
Feb 01 14:57:23 compute-0 lvm[126791]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:57:23 compute-0 lvm[126791]: VG ceph_vg2 finished
Feb 01 14:57:23 compute-0 zen_diffie[126710]: {}
Feb 01 14:57:23 compute-0 systemd[1]: libpod-881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0.scope: Deactivated successfully.
Feb 01 14:57:23 compute-0 systemd[1]: libpod-881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0.scope: Consumed 1.024s CPU time.
Feb 01 14:57:23 compute-0 podman[126794]: 2026-02-01 14:57:23.57776866 +0000 UTC m=+0.023160394 container died 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 14:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-151b1b894e6d9c45c340940a52e3c6e315ab7025f71de307035de4416f15afac-merged.mount: Deactivated successfully.
Feb 01 14:57:23 compute-0 podman[126794]: 2026-02-01 14:57:23.610179073 +0000 UTC m=+0.055570777 container remove 881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:57:23 compute-0 systemd[1]: libpod-conmon-881bee51f660c5e5ad363daa63065dfc0a2f0693e1dc6ebf41947f8c0ce90df0.scope: Deactivated successfully.
Feb 01 14:57:23 compute-0 sudo[126616]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:57:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:57:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:23 compute-0 sudo[126809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:57:23 compute-0 sudo[126809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:57:23 compute-0 sudo[126809]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb 01 14:57:24 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb 01 14:57:24 compute-0 ceph-mon[75179]: 9.1a scrub starts
Feb 01 14:57:24 compute-0 ceph-mon[75179]: 9.1a scrub ok
Feb 01 14:57:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:57:24 compute-0 ceph-mon[75179]: pgmap v324: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:25 compute-0 ceph-mon[75179]: 9.1f scrub starts
Feb 01 14:57:25 compute-0 ceph-mon[75179]: 9.1f scrub ok
Feb 01 14:57:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:26 compute-0 sshd-session[126834]: Accepted publickey for zuul from 192.168.122.30 port 50958 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:57:26 compute-0 systemd-logind[786]: New session 43 of user zuul.
Feb 01 14:57:26 compute-0 systemd[1]: Started Session 43 of User zuul.
Feb 01 14:57:26 compute-0 sshd-session[126834]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:57:26 compute-0 ceph-mon[75179]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:27 compute-0 python3.9[126987]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:57:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:57:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:57:28 compute-0 sudo[127141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uytvqyxedjagiphaadzbxpgcrzyvuzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957848.3948588-45-63600737739964/AnsiballZ_file.py'
Feb 01 14:57:28 compute-0 sudo[127141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:28 compute-0 ceph-mon[75179]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:28 compute-0 python3.9[127143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:28 compute-0 sudo[127141]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:29 compute-0 sudo[127293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krfaqrmwgghjdhpwdlmrcadlkmjhklgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957849.1157694-45-172419912877929/AnsiballZ_file.py'
Feb 01 14:57:29 compute-0 sudo[127293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:29 compute-0 python3.9[127295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:29 compute-0 sudo[127293]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:30 compute-0 sudo[127445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvgiwjtfzauthrsbydokvutgycloclvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957849.6964583-60-256949353950649/AnsiballZ_stat.py'
Feb 01 14:57:30 compute-0 sudo[127445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:30 compute-0 python3.9[127447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:30 compute-0 sudo[127445]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:30 compute-0 sudo[127568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fohjfzuvixkacyzdvzphinangnhctcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957849.6964583-60-256949353950649/AnsiballZ_copy.py'
Feb 01 14:57:30 compute-0 sudo[127568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:30 compute-0 ceph-mon[75179]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:30 compute-0 python3.9[127570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957849.6964583-60-256949353950649/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9260595ccfb8a737128d2f711a02e027536be6c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:31 compute-0 sudo[127568]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb 01 14:57:31 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb 01 14:57:31 compute-0 sudo[127720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezlsizwkpkhblojkuzdxjbtwychfyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957851.1244102-60-184050800756193/AnsiballZ_stat.py'
Feb 01 14:57:31 compute-0 sudo[127720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:31 compute-0 python3.9[127722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:31 compute-0 sudo[127720]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:31 compute-0 sudo[127843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orxqqymgfxjfdbtapyanlmwfzwpktbsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957851.1244102-60-184050800756193/AnsiballZ_copy.py'
Feb 01 14:57:31 compute-0 sudo[127843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:31 compute-0 ceph-mon[75179]: 9.1 scrub starts
Feb 01 14:57:31 compute-0 ceph-mon[75179]: 9.1 scrub ok
Feb 01 14:57:31 compute-0 python3.9[127845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957851.1244102-60-184050800756193/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c02a3680070da69434e9588f81266705f77270d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:31 compute-0 sudo[127843]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:32 compute-0 sudo[127995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlluqdbrsvcpqvfhwwsbfxhmsxeolhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957852.0789773-60-8651319724431/AnsiballZ_stat.py'
Feb 01 14:57:32 compute-0 sudo[127995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:32 compute-0 python3.9[127997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:32 compute-0 sudo[127995]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:32 compute-0 sudo[128118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huhjthidnogebcnqzdozwphrcjamoljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957852.0789773-60-8651319724431/AnsiballZ_copy.py'
Feb 01 14:57:32 compute-0 sudo[128118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:32 compute-0 ceph-mon[75179]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:33 compute-0 python3.9[128120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957852.0789773-60-8651319724431/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0bc5b0da9078601d4164f19623c22f100a1c8d20 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:33 compute-0 sudo[128118]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:33 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb 01 14:57:33 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb 01 14:57:33 compute-0 sudo[128270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpftvsukvskirbgfmaktsrrxgrrwiari ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957853.191319-104-97075292548250/AnsiballZ_file.py'
Feb 01 14:57:33 compute-0 sudo[128270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:33 compute-0 python3.9[128272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:33 compute-0 sudo[128270]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:33 compute-0 sudo[128422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawbzjyztplhqcemsewruffckqwbldol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957853.667381-104-237713887653263/AnsiballZ_file.py'
Feb 01 14:57:33 compute-0 sudo[128422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:33 compute-0 ceph-mon[75179]: 9.3 scrub starts
Feb 01 14:57:33 compute-0 ceph-mon[75179]: 9.3 scrub ok
Feb 01 14:57:34 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 01 14:57:34 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 01 14:57:34 compute-0 python3.9[128424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:34 compute-0 sudo[128422]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:34 compute-0 sudo[128574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shwdjsmfowrqikwwylbvejtjxsgrsinq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957854.315273-119-119879264879668/AnsiballZ_stat.py'
Feb 01 14:57:34 compute-0 sudo[128574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:34 compute-0 python3.9[128576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:34 compute-0 sudo[128574]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:34 compute-0 ceph-mon[75179]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:34 compute-0 ceph-mon[75179]: 9.1d scrub starts
Feb 01 14:57:34 compute-0 ceph-mon[75179]: 9.1d scrub ok
Feb 01 14:57:35 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb 01 14:57:35 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb 01 14:57:35 compute-0 sudo[128697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkjhcuygqkxnhyxktjvvrccllkplcmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957854.315273-119-119879264879668/AnsiballZ_copy.py'
Feb 01 14:57:35 compute-0 sudo[128697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:35 compute-0 python3.9[128699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957854.315273-119-119879264879668/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d9a9fbb7f6b96ec38d80529ef834e71bee1ce1e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:35 compute-0 sudo[128697]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:35 compute-0 sudo[128849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltrugwybjzepoqsrxlvadeolgihroqkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957855.4612663-119-210636477786709/AnsiballZ_stat.py'
Feb 01 14:57:35 compute-0 sudo[128849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:35 compute-0 python3.9[128851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:35 compute-0 sudo[128849]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:35 compute-0 ceph-mon[75179]: 9.1c scrub starts
Feb 01 14:57:35 compute-0 ceph-mon[75179]: 9.1c scrub ok
Feb 01 14:57:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 01 14:57:36 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 01 14:57:36 compute-0 sudo[128972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkswhlonjyiadhbdoqgedycfffmnwlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957855.4612663-119-210636477786709/AnsiballZ_copy.py'
Feb 01 14:57:36 compute-0 sudo[128972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:36 compute-0 python3.9[128974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957855.4612663-119-210636477786709/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=97c61fae7d566de33b222fed68cdd2e88fe9d99f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:36 compute-0 sudo[128972]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:36 compute-0 sudo[129124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccptoodzwrdyhfracjnzoipjzwcokpaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957856.635603-119-115907931047911/AnsiballZ_stat.py'
Feb 01 14:57:36 compute-0 sudo[129124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:36 compute-0 ceph-mon[75179]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:36 compute-0 ceph-mon[75179]: 9.1e scrub starts
Feb 01 14:57:36 compute-0 ceph-mon[75179]: 9.1e scrub ok
Feb 01 14:57:37 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb 01 14:57:37 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb 01 14:57:37 compute-0 python3.9[129126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:37 compute-0 sudo[129124]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:37 compute-0 sudo[129247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpzfufjzlszawdmvbnniametynudsfts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957856.635603-119-115907931047911/AnsiballZ_copy.py'
Feb 01 14:57:37 compute-0 sudo[129247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:37 compute-0 python3.9[129249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957856.635603-119-115907931047911/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=78b88aa8c0515a4826235758c2211ffd97d95858 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:37 compute-0 sudo[129247]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:37 compute-0 ceph-mon[75179]: 9.1b scrub starts
Feb 01 14:57:37 compute-0 ceph-mon[75179]: 9.1b scrub ok
Feb 01 14:57:38 compute-0 sudo[129399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgodkgllzcukjruhhngvzcyntacgdim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957857.8876646-163-88949993081594/AnsiballZ_file.py'
Feb 01 14:57:38 compute-0 sudo[129399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:38 compute-0 python3.9[129401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:38 compute-0 sudo[129399]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:38 compute-0 sudo[129551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moxqnlsefnenzkhzrghaeywsztjycyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957858.506148-163-231082555021608/AnsiballZ_file.py'
Feb 01 14:57:38 compute-0 sudo[129551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:38 compute-0 python3.9[129553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:38 compute-0 sudo[129551]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:38 compute-0 ceph-mon[75179]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:39 compute-0 sudo[129703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lynyejtmjeyzwgobrrmezkjzxvvyuebh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957859.074639-178-111185563606894/AnsiballZ_stat.py'
Feb 01 14:57:39 compute-0 sudo[129703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:39 compute-0 python3.9[129705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:39 compute-0 sudo[129703]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:39 compute-0 sudo[129826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsvchrpfdvuhqgrvfmdyeudltfcysgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957859.074639-178-111185563606894/AnsiballZ_copy.py'
Feb 01 14:57:39 compute-0 sudo[129826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:40 compute-0 python3.9[129828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957859.074639-178-111185563606894/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=99fed20d4344a268a2d56732cac3b434e83a9241 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:40 compute-0 sudo[129826]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:40 compute-0 sudo[129978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqbvepbnibzfmmgrmxdcknodobcntjfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957860.1533413-178-176043820767176/AnsiballZ_stat.py'
Feb 01 14:57:40 compute-0 sudo[129978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:40 compute-0 python3.9[129980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:40 compute-0 sudo[129978]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:40 compute-0 sudo[130101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvgngksfsfamogxqzkxwbdfiyiozraki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957860.1533413-178-176043820767176/AnsiballZ_copy.py'
Feb 01 14:57:40 compute-0 sudo[130101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:41 compute-0 ceph-mon[75179]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:41 compute-0 python3.9[130103]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957860.1533413-178-176043820767176/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=97c61fae7d566de33b222fed68cdd2e88fe9d99f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:41 compute-0 sudo[130101]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:41 compute-0 sudo[130253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpyyzuhnwjfeqpbptjnirqwbaqehudyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957861.247738-178-159119285909242/AnsiballZ_stat.py'
Feb 01 14:57:41 compute-0 sudo[130253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:41 compute-0 sshd-session[71347]: Received disconnect from 38.102.83.245 port 41614:11: disconnected by user
Feb 01 14:57:41 compute-0 sshd-session[71347]: Disconnected from user zuul 38.102.83.245 port 41614
Feb 01 14:57:41 compute-0 sshd-session[71344]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:57:41 compute-0 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Feb 01 14:57:41 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Feb 01 14:57:41 compute-0 systemd[1]: session-17.scope: Consumed 1min 25.461s CPU time.
Feb 01 14:57:41 compute-0 systemd-logind[786]: Removed session 17.
Feb 01 14:57:41 compute-0 python3.9[130255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:41 compute-0 sudo[130253]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:42 compute-0 sudo[130376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xldwohxctydertdgihprewoqewmoeyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957861.247738-178-159119285909242/AnsiballZ_copy.py'
Feb 01 14:57:42 compute-0 sudo[130376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:42 compute-0 python3.9[130378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957861.247738-178-159119285909242/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=30f43610e6f0292bc75a448f77a11052c29b44fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:42 compute-0 sudo[130376]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:43 compute-0 ceph-mon[75179]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:43 compute-0 sudo[130528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzhvvzftymqmwcazyrrwxsriwwhwnyvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957862.9766598-238-26716449050327/AnsiballZ_file.py'
Feb 01 14:57:43 compute-0 sudo[130528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:43 compute-0 python3.9[130530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:43 compute-0 sudo[130528]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:43 compute-0 sudo[130680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkktpmyfgnchgcgzpxerazhegxjipjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957863.6507335-246-56901500974547/AnsiballZ_stat.py'
Feb 01 14:57:43 compute-0 sudo[130680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:44 compute-0 python3.9[130682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:44 compute-0 sudo[130680]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:44 compute-0 rsyslogd[1001]: imjournal: 970 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 01 14:57:44 compute-0 sudo[130803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxllixrepoohxlkyzjmcgesdvtbjeshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957863.6507335-246-56901500974547/AnsiballZ_copy.py'
Feb 01 14:57:44 compute-0 sudo[130803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:44 compute-0 python3.9[130805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957863.6507335-246-56901500974547/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:44 compute-0 sudo[130803]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:45 compute-0 ceph-mon[75179]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:45 compute-0 sudo[130955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgkeyscpmxeryhmhqydgnpbtyjmyguy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957864.8415117-262-67544268767721/AnsiballZ_file.py'
Feb 01 14:57:45 compute-0 sudo[130955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:45 compute-0 python3.9[130957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:45 compute-0 sudo[130955]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:45 compute-0 sudo[131107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvpordfteatbddxobtnmdnuouztkpso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957865.4107182-270-97589979076498/AnsiballZ_stat.py'
Feb 01 14:57:45 compute-0 sudo[131107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:45 compute-0 python3.9[131109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:45 compute-0 sudo[131107]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:46 compute-0 sudo[131230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjpakamdvzdrmrgulznltlpddenhiov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957865.4107182-270-97589979076498/AnsiballZ_copy.py'
Feb 01 14:57:46 compute-0 sudo[131230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:46 compute-0 python3.9[131232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957865.4107182-270-97589979076498/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:46 compute-0 sudo[131230]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:46 compute-0 sudo[131382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbycigcnnknkyyxzqjwsbrpvahrkfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957866.5146651-286-152302998556393/AnsiballZ_file.py'
Feb 01 14:57:46 compute-0 sudo[131382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:46 compute-0 python3.9[131384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:46 compute-0 sudo[131382]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:47 compute-0 ceph-mon[75179]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:47 compute-0 sudo[131534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wflpjhafjijyifnosdenazjfoxifqpqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957867.1387234-294-234283495892471/AnsiballZ_stat.py'
Feb 01 14:57:47 compute-0 sudo[131534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:47 compute-0 python3.9[131536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:47 compute-0 sudo[131534]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:48 compute-0 sudo[131657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwbxjjmrddvytziefablprcbgbjwbhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957867.1387234-294-234283495892471/AnsiballZ_copy.py'
Feb 01 14:57:48 compute-0 sudo[131657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:48 compute-0 python3.9[131659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957867.1387234-294-234283495892471/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:48 compute-0 sudo[131657]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:57:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:57:48 compute-0 sudo[131809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktcqcgrfykzcuhagbbjuqgbnrwctvxri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957868.5557659-310-256612449492203/AnsiballZ_file.py'
Feb 01 14:57:48 compute-0 sudo[131809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:49 compute-0 ceph-mon[75179]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:49 compute-0 python3.9[131811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:49 compute-0 sudo[131809]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:49 compute-0 sudo[131961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzaupsbmhefdivoawjsgppgzghgteev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957869.2526317-318-93359104206061/AnsiballZ_stat.py'
Feb 01 14:57:49 compute-0 sudo[131961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:49 compute-0 python3.9[131963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:49 compute-0 sudo[131961]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:49 compute-0 sudo[132084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakiefxserrirumvloqngvjqssslutoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957869.2526317-318-93359104206061/AnsiballZ_copy.py'
Feb 01 14:57:49 compute-0 sudo[132084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:50 compute-0 python3.9[132086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957869.2526317-318-93359104206061/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:50 compute-0 sudo[132084]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:50 compute-0 sudo[132236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevabdbjjzqglzczidjzvhjambldhwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957870.349548-334-16163874706038/AnsiballZ_file.py'
Feb 01 14:57:50 compute-0 sudo[132236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:50 compute-0 python3.9[132238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:50 compute-0 sudo[132236]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:51 compute-0 ceph-mon[75179]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:51 compute-0 sudo[132388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccolirzffhpqplfsdoozzeazozmiyzdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957870.8745797-342-183848414117328/AnsiballZ_stat.py'
Feb 01 14:57:51 compute-0 sudo[132388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:51 compute-0 python3.9[132390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:51 compute-0 sudo[132388]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:51 compute-0 sudo[132511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwudnqtvyehtrsqisxicrfcqkiqsuwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957870.8745797-342-183848414117328/AnsiballZ_copy.py'
Feb 01 14:57:51 compute-0 sudo[132511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:51 compute-0 python3.9[132513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957870.8745797-342-183848414117328/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:51 compute-0 sudo[132511]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:52 compute-0 sudo[132663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nffejxoimwilsyqqdsojdpzkvtaxoonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957871.9282966-358-182459392557214/AnsiballZ_file.py'
Feb 01 14:57:52 compute-0 sudo[132663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:52 compute-0 python3.9[132665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:57:52 compute-0 sudo[132663]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:52 compute-0 sudo[132815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcwzopkpplifevptvcycqykzgzbkxksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957872.5339515-366-1286432164278/AnsiballZ_stat.py'
Feb 01 14:57:52 compute-0 sudo[132815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:52 compute-0 python3.9[132817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:57:52 compute-0 sudo[132815]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:53 compute-0 ceph-mon[75179]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:53 compute-0 sudo[132938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjtwnswduazmjcfckajrupnnyzjafkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957872.5339515-366-1286432164278/AnsiballZ_copy.py'
Feb 01 14:57:53 compute-0 sudo[132938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:57:53 compute-0 python3.9[132940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957872.5339515-366-1286432164278/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aa242a09ed097a69fc2e0c42a39abd6f1899daab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:57:53 compute-0 sudo[132938]: pam_unix(sudo:session): session closed for user root
Feb 01 14:57:53 compute-0 sshd-session[126837]: Connection closed by 192.168.122.30 port 50958
Feb 01 14:57:53 compute-0 sshd-session[126834]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:57:53 compute-0 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Feb 01 14:57:53 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Feb 01 14:57:53 compute-0 systemd[1]: session-43.scope: Consumed 19.458s CPU time.
Feb 01 14:57:53 compute-0 systemd-logind[786]: Removed session 43.
Feb 01 14:57:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:55 compute-0 ceph-mon[75179]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:57:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:57 compute-0 ceph-mon[75179]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:59 compute-0 ceph-mon[75179]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:59 compute-0 sshd-session[132965]: Accepted publickey for zuul from 192.168.122.30 port 36890 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:57:59 compute-0 systemd-logind[786]: New session 44 of user zuul.
Feb 01 14:57:59 compute-0 systemd[1]: Started Session 44 of User zuul.
Feb 01 14:57:59 compute-0 sshd-session[132965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:57:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:57:59 compute-0 sudo[133118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctpueohowpbduvnqlvkfikuvxinkctio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957879.57267-17-209130052759983/AnsiballZ_file.py'
Feb 01 14:57:59 compute-0 sudo[133118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:00 compute-0 python3.9[133120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:00 compute-0 sudo[133118]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:00 compute-0 sudo[133270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwnabbletgykjcshialcrfmcbintkjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957880.33819-29-54586483380579/AnsiballZ_stat.py'
Feb 01 14:58:00 compute-0 sudo[133270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:00 compute-0 python3.9[133272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:00 compute-0 sudo[133270]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:01 compute-0 ceph-mon[75179]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:01 compute-0 sudo[133393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwgmwjtfqlaltwwfzmddjvxesfiuupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957880.33819-29-54586483380579/AnsiballZ_copy.py'
Feb 01 14:58:01 compute-0 sudo[133393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:01 compute-0 python3.9[133395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957880.33819-29-54586483380579/.source.conf _original_basename=ceph.conf follow=False checksum=15e400aca5823242b048f6d77e32d66f71f9194c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:01 compute-0 sudo[133393]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:01 compute-0 sudo[133545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljostsejbzmmsofbisclddoowxrxncm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957881.7052343-29-128662861448092/AnsiballZ_stat.py'
Feb 01 14:58:01 compute-0 sudo[133545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:02 compute-0 python3.9[133547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:02 compute-0 sudo[133545]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:02 compute-0 sudo[133668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omddqbzsrhnjdazpojiqgoqjekkytxpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957881.7052343-29-128662861448092/AnsiballZ_copy.py'
Feb 01 14:58:02 compute-0 sudo[133668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:02 compute-0 python3.9[133670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957881.7052343-29-128662861448092/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9e80b5c3ad70771b2808c3ea209191214d8953f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:02 compute-0 sudo[133668]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:02 compute-0 sshd-session[132968]: Connection closed by 192.168.122.30 port 36890
Feb 01 14:58:02 compute-0 sshd-session[132965]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:58:02 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Feb 01 14:58:02 compute-0 systemd[1]: session-44.scope: Consumed 2.100s CPU time.
Feb 01 14:58:02 compute-0 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Feb 01 14:58:02 compute-0 systemd-logind[786]: Removed session 44.
Feb 01 14:58:03 compute-0 ceph-mon[75179]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:05 compute-0 ceph-mon[75179]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:07 compute-0 ceph-mon[75179]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:08 compute-0 sshd-session[133695]: Accepted publickey for zuul from 192.168.122.30 port 37556 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:58:08 compute-0 systemd-logind[786]: New session 45 of user zuul.
Feb 01 14:58:08 compute-0 systemd[1]: Started Session 45 of User zuul.
Feb 01 14:58:08 compute-0 sshd-session[133695]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:58:09 compute-0 ceph-mon[75179]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:09 compute-0 python3.9[133848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:58:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:10 compute-0 sudo[134002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opqwkhxxazljkydqnqvieiiynpsusoli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957890.0043018-29-83042778291220/AnsiballZ_file.py'
Feb 01 14:58:10 compute-0 sudo[134002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:10 compute-0 python3.9[134004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:10 compute-0 sudo[134002]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:10 compute-0 sudo[134154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsodhmdjcxisquahbmvuuedwfafceuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957890.7730515-29-170328829249217/AnsiballZ_file.py'
Feb 01 14:58:10 compute-0 sudo[134154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:11 compute-0 python3.9[134156]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:11 compute-0 sudo[134154]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:11 compute-0 ceph-mon[75179]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:11 compute-0 python3.9[134306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:58:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:12 compute-0 sudo[134456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwmcbeoeglxjzbefitfmbovgwqkfjolx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957892.0230408-52-62584623379344/AnsiballZ_seboolean.py'
Feb 01 14:58:12 compute-0 sudo[134456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:12 compute-0 python3.9[134458]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 01 14:58:13 compute-0 ceph-mon[75179]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:13 compute-0 sudo[134456]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:14 compute-0 sudo[134612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdilnzzmmshjtstzeqiomhtdubumpram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957893.9919343-62-112069988960973/AnsiballZ_setup.py'
Feb 01 14:58:14 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 01 14:58:14 compute-0 sudo[134612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:14 compute-0 python3.9[134614]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:58:14 compute-0 sudo[134612]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:15 compute-0 sudo[134696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtknsdodjdnpkkapptjbafkrgxftivr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957893.9919343-62-112069988960973/AnsiballZ_dnf.py'
Feb 01 14:58:15 compute-0 sudo[134696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:15 compute-0 ceph-mon[75179]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:15 compute-0 python3.9[134698]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:58:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:16 compute-0 sudo[134696]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:17 compute-0 sudo[134849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfcemiidubjiogajioalshnmecvqvkbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957896.5808156-74-236221711531323/AnsiballZ_systemd.py'
Feb 01 14:58:17 compute-0 sudo[134849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:17 compute-0 ceph-mon[75179]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:17 compute-0 python3.9[134851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:58:17
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:58:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:18 compute-0 ceph-mon[75179]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:18 compute-0 sudo[134849]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:58:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:58:19 compute-0 sudo[135004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbucdmgbtefyehupkkqczftvywftejd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957898.6526747-82-86591642774182/AnsiballZ_edpm_nftables_snippet.py'
Feb 01 14:58:19 compute-0 sudo[135004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:19 compute-0 python3[135006]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 01 14:58:19 compute-0 sudo[135004]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:19 compute-0 sudo[135156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yusrgkfxaznvyriadjomkvtzxeqlhfpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957899.4020593-91-153406903558854/AnsiballZ_file.py'
Feb 01 14:58:19 compute-0 sudo[135156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:19 compute-0 python3.9[135158]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:19 compute-0 sudo[135156]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:20 compute-0 sudo[135308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzkwfauqxddtgoaqdqouhhwulemuqvzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957899.9304624-99-82477405781919/AnsiballZ_stat.py'
Feb 01 14:58:20 compute-0 sudo[135308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:20 compute-0 python3.9[135310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:20 compute-0 sudo[135308]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:20 compute-0 sudo[135386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspntiiachmdijbucsawpsybjefmzqug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957899.9304624-99-82477405781919/AnsiballZ_file.py'
Feb 01 14:58:20 compute-0 sudo[135386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:20 compute-0 python3.9[135388]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:20 compute-0 sudo[135386]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:20 compute-0 ceph-mon[75179]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:21 compute-0 sudo[135538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-freftjnwgydxxznrildgixnibxqrnkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957900.970894-111-180993592631262/AnsiballZ_stat.py'
Feb 01 14:58:21 compute-0 sudo[135538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:21 compute-0 python3.9[135540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:21 compute-0 sudo[135538]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:21 compute-0 sudo[135616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwiofddtaptxbrpdseldltyrccnvynt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957900.970894-111-180993592631262/AnsiballZ_file.py'
Feb 01 14:58:21 compute-0 sudo[135616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:21 compute-0 python3.9[135618]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9gia48h1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:21 compute-0 sudo[135616]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:22 compute-0 sudo[135768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sghlcgdlbpryhazdkegxblrbtbqpnvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957901.9160855-123-76821793382360/AnsiballZ_stat.py'
Feb 01 14:58:22 compute-0 sudo[135768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:22 compute-0 python3.9[135770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:22 compute-0 sudo[135768]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:22 compute-0 sudo[135846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpietfsbpqestptkaxggmatfecgpfzew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957901.9160855-123-76821793382360/AnsiballZ_file.py'
Feb 01 14:58:22 compute-0 sudo[135846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:22 compute-0 python3.9[135848]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:22 compute-0 sudo[135846]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:22 compute-0 ceph-mon[75179]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:23 compute-0 sudo[135998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuwhknvthktcqpqswdggmmddvckfjggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957902.8569913-136-38109962857218/AnsiballZ_command.py'
Feb 01 14:58:23 compute-0 sudo[135998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:23 compute-0 python3.9[136000]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:23 compute-0 sudo[135998]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:23 compute-0 sudo[136078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:58:23 compute-0 sudo[136078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:23 compute-0 sudo[136078]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:23 compute-0 sudo[136103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:58:23 compute-0 sudo[136103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:23 compute-0 sudo[136201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgcakwahhsmkvjvsuhvagbdpjenphufe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957903.5916245-144-74990081213073/AnsiballZ_edpm_nftables_from_files.py'
Feb 01 14:58:23 compute-0 sudo[136201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:24 compute-0 python3[136203]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 01 14:58:24 compute-0 sudo[136201]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:24 compute-0 sudo[136103]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:58:24 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:58:24 compute-0 sudo[136268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:58:24 compute-0 sudo[136268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:24 compute-0 sudo[136268]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:24 compute-0 sudo[136316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:58:24 compute-0 sudo[136316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:24 compute-0 sudo[136435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhzplvyifscciymvrnxjqexdhvkeuwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957904.369393-152-248137144223922/AnsiballZ_stat.py'
Feb 01 14:58:24 compute-0 sudo[136435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.732170668 +0000 UTC m=+0.041165148 container create ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:58:24 compute-0 systemd[1]: Started libpod-conmon-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope.
Feb 01 14:58:24 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.794324194 +0000 UTC m=+0.103318674 container init ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.800142927 +0000 UTC m=+0.109137457 container start ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.80415021 +0000 UTC m=+0.113144730 container attach ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:58:24 compute-0 determined_turing[136468]: 167 167
Feb 01 14:58:24 compute-0 systemd[1]: libpod-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope: Deactivated successfully.
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.805459647 +0000 UTC m=+0.114454137 container died ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.718348549 +0000 UTC m=+0.027343049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:24 compute-0 python3.9[136437]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-121a7840aa8c89d66f92a18da3686e91979e3f31be00cd0723388d600741e889-merged.mount: Deactivated successfully.
Feb 01 14:58:24 compute-0 podman[136451]: 2026-02-01 14:58:24.853416014 +0000 UTC m=+0.162410504 container remove ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:58:24 compute-0 sudo[136435]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:24 compute-0 systemd[1]: libpod-conmon-ea54aef75138fe7c7d9818dd559fcc9dbc0cd1815f72f20c1f93a098fa03a895.scope: Deactivated successfully.
Feb 01 14:58:24 compute-0 ceph-mon[75179]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:58:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:58:24 compute-0 podman[136520]: 2026-02-01 14:58:24.989646571 +0000 UTC m=+0.044704657 container create 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 14:58:25 compute-0 systemd[1]: Started libpod-conmon-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope.
Feb 01 14:58:25 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:24.972093728 +0000 UTC m=+0.027151834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:25.076771359 +0000 UTC m=+0.131829505 container init 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:25.081253505 +0000 UTC m=+0.136311601 container start 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:25.084508786 +0000 UTC m=+0.139566892 container attach 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:58:25 compute-0 sudo[136638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvbyrnagzbmfbkrkcjlfladqkgkvpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957904.369393-152-248137144223922/AnsiballZ_copy.py'
Feb 01 14:58:25 compute-0 sudo[136638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:25 compute-0 python3.9[136640]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957904.369393-152-248137144223922/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:25 compute-0 sudo[136638]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:25 compute-0 sleepy_ardinghelli[136558]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:58:25 compute-0 sleepy_ardinghelli[136558]: --> All data devices are unavailable
Feb 01 14:58:25 compute-0 systemd[1]: libpod-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope: Deactivated successfully.
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:25.620212677 +0000 UTC m=+0.675270803 container died 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 14:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e54a66320cd8fda393a6f3db173d235d822e236160321e36e17f885751deb6-merged.mount: Deactivated successfully.
Feb 01 14:58:25 compute-0 podman[136520]: 2026-02-01 14:58:25.670469468 +0000 UTC m=+0.725527584 container remove 63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:58:25 compute-0 systemd[1]: libpod-conmon-63d3eabae0ba48e9c1a3ded36011a12829ab3bb8d098076b5a30d012fdc337b8.scope: Deactivated successfully.
Feb 01 14:58:25 compute-0 sudo[136316]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:25 compute-0 sudo[136714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:58:25 compute-0 sudo[136714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:25 compute-0 sudo[136714]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:25 compute-0 sudo[136768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:58:25 compute-0 sudo[136768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:26 compute-0 sudo[136868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhkdswvcgnmqsmvzqfrixckqobubmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957905.715483-167-38190679384665/AnsiballZ_stat.py'
Feb 01 14:58:26 compute-0 sudo[136868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.102128006 +0000 UTC m=+0.056421897 container create 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 01 14:58:26 compute-0 systemd[1]: Started libpod-conmon-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope.
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.070921109 +0000 UTC m=+0.025215050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:26 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.195032356 +0000 UTC m=+0.149326257 container init 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.203082192 +0000 UTC m=+0.157376083 container start 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.207028753 +0000 UTC m=+0.161322624 container attach 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 14:58:26 compute-0 nervous_haslett[136898]: 167 167
Feb 01 14:58:26 compute-0 systemd[1]: libpod-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope: Deactivated successfully.
Feb 01 14:58:26 compute-0 conmon[136898]: conmon 784d74a350923ff52e92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope/container/memory.events
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.21228105 +0000 UTC m=+0.166574911 container died 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 01 14:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-268be11ff5e90f2ac3592089f7b8fc61340a2ed2fb34c10bc548079565c18ed1-merged.mount: Deactivated successfully.
Feb 01 14:58:26 compute-0 podman[136881]: 2026-02-01 14:58:26.24857989 +0000 UTC m=+0.202873751 container remove 784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:58:26 compute-0 python3.9[136879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:26 compute-0 systemd[1]: libpod-conmon-784d74a350923ff52e927f4ace78478b5a31c4ce0b1f481cacf9a5288565edd0.scope: Deactivated successfully.
Feb 01 14:58:26 compute-0 sudo[136868]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.374194319 +0000 UTC m=+0.048435921 container create b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:58:26 compute-0 systemd[1]: Started libpod-conmon-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope.
Feb 01 14:58:26 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.353443066 +0000 UTC m=+0.027684678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.463148138 +0000 UTC m=+0.137389730 container init b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.471812392 +0000 UTC m=+0.146053964 container start b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.47531055 +0000 UTC m=+0.149552142 container attach b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:58:26 compute-0 sudo[137065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrwmdgicrgexptitbwddumnkavjgsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957905.715483-167-38190679384665/AnsiballZ_copy.py'
Feb 01 14:58:26 compute-0 sudo[137065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:26 compute-0 heuristic_borg[136987]: {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     "0": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "devices": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "/dev/loop3"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             ],
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_name": "ceph_lv0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_size": "21470642176",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "name": "ceph_lv0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "tags": {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_name": "ceph",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.crush_device_class": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.encrypted": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.objectstore": "bluestore",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_id": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.vdo": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.with_tpm": "0"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             },
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "vg_name": "ceph_vg0"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         }
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     ],
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     "1": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "devices": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "/dev/loop4"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             ],
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_name": "ceph_lv1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_size": "21470642176",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "name": "ceph_lv1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "tags": {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_name": "ceph",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.crush_device_class": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.encrypted": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.objectstore": "bluestore",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_id": "1",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.vdo": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.with_tpm": "0"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             },
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "vg_name": "ceph_vg1"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         }
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     ],
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     "2": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "devices": [
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "/dev/loop5"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             ],
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_name": "ceph_lv2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_size": "21470642176",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "name": "ceph_lv2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "tags": {
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.cluster_name": "ceph",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.crush_device_class": "",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.encrypted": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.objectstore": "bluestore",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osd_id": "2",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.vdo": "0",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:                 "ceph.with_tpm": "0"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             },
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "type": "block",
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:             "vg_name": "ceph_vg2"
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:         }
Feb 01 14:58:26 compute-0 heuristic_borg[136987]:     ]
Feb 01 14:58:26 compute-0 heuristic_borg[136987]: }
Feb 01 14:58:26 compute-0 systemd[1]: libpod-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope: Deactivated successfully.
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.719745938 +0000 UTC m=+0.393987520 container died b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 14:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-286e64572da4637cab9ebe1425bfca924362c4f68d396c22390e5abbdda984f2-merged.mount: Deactivated successfully.
Feb 01 14:58:26 compute-0 podman[136924]: 2026-02-01 14:58:26.764556096 +0000 UTC m=+0.438797698 container remove b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_borg, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:58:26 compute-0 systemd[1]: libpod-conmon-b4e0226ecbb4b324515c2d24e53d20f1d9645424318dfcfdf7eecca1f65e9ec2.scope: Deactivated successfully.
Feb 01 14:58:26 compute-0 sudo[136768]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:26 compute-0 python3.9[137069]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957905.715483-167-38190679384665/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:26 compute-0 sudo[137084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:58:26 compute-0 sudo[137084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:26 compute-0 sudo[137084]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:26 compute-0 sudo[137065]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:26 compute-0 sudo[137109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:58:26 compute-0 sudo[137109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:26 compute-0 ceph-mon[75179]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.105212376 +0000 UTC m=+0.029660644 container create 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:58:27 compute-0 systemd[1]: Started libpod-conmon-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope.
Feb 01 14:58:27 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.167094935 +0000 UTC m=+0.091543253 container init 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.172024823 +0000 UTC m=+0.096473101 container start 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:58:27 compute-0 wizardly_bardeen[137261]: 167 167
Feb 01 14:58:27 compute-0 systemd[1]: libpod-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope: Deactivated successfully.
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.178117304 +0000 UTC m=+0.102565622 container attach 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.178415223 +0000 UTC m=+0.102863511 container died 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.092003255 +0000 UTC m=+0.016451553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05fe6d3c3149f3a42499f462b57e551bbd6b85f443434e624ee93bfccd3103d-merged.mount: Deactivated successfully.
Feb 01 14:58:27 compute-0 podman[137222]: 2026-02-01 14:58:27.212678245 +0000 UTC m=+0.137126513 container remove 502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bardeen, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 14:58:27 compute-0 systemd[1]: libpod-conmon-502627caa53a3218e51de76fa1dd89456b5646a48d55628af948411010c82dc9.scope: Deactivated successfully.
Feb 01 14:58:27 compute-0 sudo[137330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvsxhfioxwfcitcvjrcsoizgcpsszfho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957906.9877667-182-62597301555505/AnsiballZ_stat.py'
Feb 01 14:58:27 compute-0 sudo[137330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:27 compute-0 podman[137338]: 2026-02-01 14:58:27.328371716 +0000 UTC m=+0.033321087 container create ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:58:27 compute-0 systemd[1]: Started libpod-conmon-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope.
Feb 01 14:58:27 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:58:27 compute-0 podman[137338]: 2026-02-01 14:58:27.402171829 +0000 UTC m=+0.107121260 container init ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:58:27 compute-0 podman[137338]: 2026-02-01 14:58:27.312674355 +0000 UTC m=+0.017623746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:58:27 compute-0 podman[137338]: 2026-02-01 14:58:27.410108852 +0000 UTC m=+0.115058233 container start ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:58:27 compute-0 podman[137338]: 2026-02-01 14:58:27.413120747 +0000 UTC m=+0.118070168 container attach ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 14:58:27 compute-0 python3.9[137332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:27 compute-0 sudo[137330]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:27 compute-0 sudo[137510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkamgvwgpsnawiqlrxnfzldbzyrgchvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957906.9877667-182-62597301555505/AnsiballZ_copy.py'
Feb 01 14:58:27 compute-0 sudo[137510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:27 compute-0 python3.9[137518]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957906.9877667-182-62597301555505/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:27 compute-0 sudo[137510]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:27 compute-0 lvm[137557]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:58:27 compute-0 lvm[137557]: VG ceph_vg0 finished
Feb 01 14:58:27 compute-0 lvm[137559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:58:27 compute-0 lvm[137559]: VG ceph_vg1 finished
Feb 01 14:58:27 compute-0 lvm[137562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:58:27 compute-0 lvm[137562]: VG ceph_vg2 finished
Feb 01 14:58:27 compute-0 lvm[137586]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:58:27 compute-0 lvm[137586]: VG ceph_vg0 finished
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:58:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:58:28 compute-0 gifted_faraday[137355]: {}
Feb 01 14:58:28 compute-0 systemd[1]: libpod-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope: Deactivated successfully.
Feb 01 14:58:28 compute-0 podman[137338]: 2026-02-01 14:58:28.079022055 +0000 UTC m=+0.783971436 container died ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a589328c595e4967684b40f30ccaef3f9f85245b16a15beb1f6d6c92e02755-merged.mount: Deactivated successfully.
Feb 01 14:58:28 compute-0 podman[137338]: 2026-02-01 14:58:28.113288258 +0000 UTC m=+0.818237629 container remove ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 14:58:28 compute-0 systemd[1]: libpod-conmon-ac348ac911d705df40d69bb3cdc367e5145ce8cd701c01056db2989b998aa51d.scope: Deactivated successfully.
Feb 01 14:58:28 compute-0 sudo[137109]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:58:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:58:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:28 compute-0 sudo[137651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:58:28 compute-0 sudo[137651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:58:28 compute-0 sudo[137651]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:28 compute-0 sudo[137749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzfszgrgbqdnwqzblcbrfkjbnhidmpfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957908.0846329-197-64754111571843/AnsiballZ_stat.py'
Feb 01 14:58:28 compute-0 sudo[137749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:28 compute-0 python3.9[137751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:28 compute-0 sudo[137749]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:28 compute-0 sudo[137874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksntrkfecijxkascvunvwlkhkwesirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957908.0846329-197-64754111571843/AnsiballZ_copy.py'
Feb 01 14:58:28 compute-0 sudo[137874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:28 compute-0 python3.9[137876]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957908.0846329-197-64754111571843/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:28 compute-0 sudo[137874]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:29 compute-0 ceph-mon[75179]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:58:29 compute-0 sudo[138026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcwtffmvqmpmkjtcdrzickbczrzvmjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957909.1708248-212-183959532752015/AnsiballZ_stat.py'
Feb 01 14:58:29 compute-0 sudo[138026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:29 compute-0 python3.9[138028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:29 compute-0 sudo[138026]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:30 compute-0 sudo[138151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfrqemahcoqqlbcxiikrkoksxplynejz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957909.1708248-212-183959532752015/AnsiballZ_copy.py'
Feb 01 14:58:30 compute-0 sudo[138151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:30 compute-0 python3.9[138153]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769957909.1708248-212-183959532752015/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:30 compute-0 sudo[138151]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:30 compute-0 sudo[138303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noaiobcxnfofygikviuaoowkhcgjqsdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957910.3835893-227-117289520892538/AnsiballZ_file.py'
Feb 01 14:58:30 compute-0 sudo[138303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:30 compute-0 python3.9[138305]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:30 compute-0 sudo[138303]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:31 compute-0 sudo[138455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgqjjkonxdiweadsjrsanqfschskngzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957910.9245179-235-57349542011966/AnsiballZ_command.py'
Feb 01 14:58:31 compute-0 sudo[138455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:31 compute-0 ceph-mon[75179]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:31 compute-0 python3.9[138457]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:31 compute-0 sudo[138455]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:31 compute-0 sudo[138610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnwcqdecpadtjzxrhwsxxinjbeigflmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957911.58658-243-229127310919457/AnsiballZ_blockinfile.py'
Feb 01 14:58:31 compute-0 sudo[138610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:32 compute-0 python3.9[138612]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:32 compute-0 sudo[138610]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:32 compute-0 sudo[138762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpasqwoylvgbxamcxjwylvzijgztipnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957912.3373249-252-113123484777124/AnsiballZ_command.py'
Feb 01 14:58:32 compute-0 sudo[138762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:32 compute-0 python3.9[138764]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:32 compute-0 sudo[138762]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:33 compute-0 ceph-mon[75179]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:33 compute-0 sudo[138915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvuxxqufufakamooojjoziztxkyroca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957913.0012877-260-60570129535915/AnsiballZ_stat.py'
Feb 01 14:58:33 compute-0 sudo[138915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:33 compute-0 python3.9[138917]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:58:33 compute-0 sudo[138915]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:33 compute-0 sudo[139069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuoilodmaxiwmmcqnfmgetxqtcoexwdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957913.7023544-268-137189487765396/AnsiballZ_command.py'
Feb 01 14:58:33 compute-0 sudo[139069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:34 compute-0 python3.9[139071]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:34 compute-0 sudo[139069]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:34 compute-0 sudo[139224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eglpvlpoyjyqwzdkaywythbajgurbavn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957914.397975-276-125937625337663/AnsiballZ_file.py'
Feb 01 14:58:34 compute-0 sudo[139224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:34 compute-0 python3.9[139226]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:34 compute-0 sudo[139224]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:35 compute-0 ceph-mon[75179]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:36 compute-0 python3.9[139376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:58:36 compute-0 ceph-mon[75179]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:36 compute-0 sudo[139527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsyxzswymhdrxkiyzdtmjxcvqzwayrjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957916.701426-316-129279635372634/AnsiballZ_command.py'
Feb 01 14:58:36 compute-0 sudo[139527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:37 compute-0 python3.9[139529]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:37 compute-0 ovs-vsctl[139530]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 01 14:58:37 compute-0 sudo[139527]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:37 compute-0 sudo[139680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwjpwcqgfhqmakxnbhbljbymdycfsfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957917.4448495-325-276323700148507/AnsiballZ_command.py'
Feb 01 14:58:37 compute-0 sudo[139680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:37 compute-0 python3.9[139682]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:37 compute-0 sudo[139680]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:38 compute-0 sudo[139835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrkdlzwswypsxikpracnldfpxsacinio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957917.9702492-333-61890692369534/AnsiballZ_command.py'
Feb 01 14:58:38 compute-0 sudo[139835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:38 compute-0 python3.9[139837]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:58:38 compute-0 ovs-vsctl[139838]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 01 14:58:38 compute-0 sudo[139835]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:38 compute-0 ceph-mon[75179]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:39 compute-0 python3.9[139988]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:58:39 compute-0 sudo[140140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfgocoltbpmomisxglwbkavzxntmktlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957919.3040102-350-202399621565283/AnsiballZ_file.py'
Feb 01 14:58:39 compute-0 sudo[140140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:39 compute-0 python3.9[140142]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:39 compute-0 sudo[140140]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:40 compute-0 sudo[140292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhmnlmvgehdzpfoipmhswsfrsnabdsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957919.9482949-358-222573998528338/AnsiballZ_stat.py'
Feb 01 14:58:40 compute-0 sudo[140292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:40 compute-0 python3.9[140294]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:40 compute-0 sudo[140292]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:40 compute-0 sudo[140370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cikungzbvuhegcenailrurwnaqgoodvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957919.9482949-358-222573998528338/AnsiballZ_file.py'
Feb 01 14:58:40 compute-0 sudo[140370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:40 compute-0 python3.9[140372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:40 compute-0 sudo[140370]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:40 compute-0 ceph-mon[75179]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:41 compute-0 sudo[140522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzwxiszmyixslkseaulmoazwghccoyrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957921.018339-358-38683460315185/AnsiballZ_stat.py'
Feb 01 14:58:41 compute-0 sudo[140522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:41 compute-0 python3.9[140524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:41 compute-0 sudo[140522]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:41 compute-0 sudo[140600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rovekrqyyimdbspkzawlondgvlnscepg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957921.018339-358-38683460315185/AnsiballZ_file.py'
Feb 01 14:58:41 compute-0 sudo[140600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:41 compute-0 python3.9[140602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:41 compute-0 sudo[140600]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:42 compute-0 sudo[140752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ontrcubeayryounreijkkxeehdbkoixf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957921.971299-381-190269470086366/AnsiballZ_file.py'
Feb 01 14:58:42 compute-0 sudo[140752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:42 compute-0 python3.9[140754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:42 compute-0 sudo[140752]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:42 compute-0 sudo[140904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymrotrxcnwoixnlbjsuemtmgdoznyydk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957922.5441155-389-103741048765453/AnsiballZ_stat.py'
Feb 01 14:58:42 compute-0 sudo[140904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:42 compute-0 ceph-mon[75179]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:42 compute-0 python3.9[140906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:43 compute-0 sudo[140904]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:43 compute-0 sudo[140982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hclyzruikbxjdfvvxbizuzxukpyvncdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957922.5441155-389-103741048765453/AnsiballZ_file.py'
Feb 01 14:58:43 compute-0 sudo[140982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:43 compute-0 python3.9[140984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:43 compute-0 sudo[140982]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:43 compute-0 sudo[141134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntyhxgnsybwwmenrijupnxxaouaedhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957923.5344622-401-95551321623022/AnsiballZ_stat.py'
Feb 01 14:58:43 compute-0 sudo[141134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:43 compute-0 python3.9[141136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:44 compute-0 sudo[141134]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:44 compute-0 sudo[141212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naerawwsdvfoafqzxdamfeycmlixpolq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957923.5344622-401-95551321623022/AnsiballZ_file.py'
Feb 01 14:58:44 compute-0 sudo[141212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:44 compute-0 python3.9[141214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:44 compute-0 sudo[141212]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:44 compute-0 sudo[141364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiltgtbidvhwslixkzwylmzaryjriump ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957924.5151708-413-177724054457117/AnsiballZ_systemd.py'
Feb 01 14:58:44 compute-0 sudo[141364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:44 compute-0 ceph-mon[75179]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:45 compute-0 python3.9[141366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:58:45 compute-0 systemd[1]: Reloading.
Feb 01 14:58:45 compute-0 sshd-session[141367]: Connection closed by 80.94.92.171 port 46050
Feb 01 14:58:45 compute-0 systemd-sysv-generator[141400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:58:45 compute-0 systemd-rc-local-generator[141397]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:58:45 compute-0 sudo[141364]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:45 compute-0 sudo[141556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwjwossadlbvzgnlanfnjloonpyledt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957925.4623659-421-253160960572414/AnsiballZ_stat.py'
Feb 01 14:58:45 compute-0 sudo[141556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:45 compute-0 python3.9[141558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:45 compute-0 sudo[141556]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:46 compute-0 sudo[141634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rarlqnfbkpcvalxkxmsoqpupvnntzuky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957925.4623659-421-253160960572414/AnsiballZ_file.py'
Feb 01 14:58:46 compute-0 sudo[141634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:46 compute-0 python3.9[141636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:46 compute-0 sudo[141634]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:46 compute-0 sudo[141786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjumvcmdlpasxgpixhykkgogebozsmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957926.4661834-433-8459339947248/AnsiballZ_stat.py'
Feb 01 14:58:46 compute-0 sudo[141786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:46 compute-0 python3.9[141788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:46 compute-0 sudo[141786]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:46 compute-0 ceph-mon[75179]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:47 compute-0 sudo[141864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqbxdnxrjtinjijasxabegmolfcgraln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957926.4661834-433-8459339947248/AnsiballZ_file.py'
Feb 01 14:58:47 compute-0 sudo[141864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:47 compute-0 python3.9[141866]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:47 compute-0 sudo[141864]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:47 compute-0 sudo[142016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjgbtrnanxgzdpsgbjzgngpforyehdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957927.5022702-445-76093154693877/AnsiballZ_systemd.py'
Feb 01 14:58:47 compute-0 sudo[142016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:48 compute-0 python3.9[142018]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:58:48 compute-0 systemd[1]: Reloading.
Feb 01 14:58:48 compute-0 systemd-rc-local-generator[142040]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:58:48 compute-0 systemd-sysv-generator[142043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:58:48 compute-0 systemd[1]: Starting Create netns directory...
Feb 01 14:58:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 01 14:58:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 01 14:58:48 compute-0 systemd[1]: Finished Create netns directory.
Feb 01 14:58:48 compute-0 sudo[142016]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:58:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:58:48 compute-0 sudo[142209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwktkmdhlcgbooppjdkersyonyzrbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957928.5516374-455-118681839051587/AnsiballZ_file.py'
Feb 01 14:58:48 compute-0 sudo[142209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:48 compute-0 ceph-mon[75179]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:49 compute-0 python3.9[142211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:49 compute-0 sudo[142209]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:49 compute-0 sudo[142361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgkpjujxxzwoccktsuolvlbagigfugv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957929.1750705-463-132301564522633/AnsiballZ_stat.py'
Feb 01 14:58:49 compute-0 sudo[142361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:49 compute-0 python3.9[142363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:49 compute-0 sudo[142361]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:49 compute-0 sudo[142484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuaagygpegnmtmjnweuusdizpshdqwpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957929.1750705-463-132301564522633/AnsiballZ_copy.py'
Feb 01 14:58:49 compute-0 sudo[142484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:50 compute-0 python3.9[142486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957929.1750705-463-132301564522633/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:50 compute-0 sudo[142484]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:50 compute-0 sudo[142636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-labfwbgtpzstmwaravspcxgszhxtsqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957930.5113237-480-38862909849756/AnsiballZ_file.py'
Feb 01 14:58:50 compute-0 sudo[142636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:50 compute-0 python3.9[142638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:50 compute-0 sudo[142636]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:51 compute-0 ceph-mon[75179]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:51 compute-0 sudo[142788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atwhhgjurxiksewannoplctongjcogbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957931.0607722-488-6910794958964/AnsiballZ_file.py'
Feb 01 14:58:51 compute-0 sudo[142788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:51 compute-0 python3.9[142790]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:58:51 compute-0 sudo[142788]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:51 compute-0 sudo[142940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmwavvwyshrcpcmyxdeoaskpwmmqhvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957931.6012402-496-34135964142153/AnsiballZ_stat.py'
Feb 01 14:58:51 compute-0 sudo[142940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:51 compute-0 python3.9[142942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:58:51 compute-0 sudo[142940]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:52 compute-0 sudo[143063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnspjanbcseydeqeybpcjhbbrflsbhjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957931.6012402-496-34135964142153/AnsiballZ_copy.py'
Feb 01 14:58:52 compute-0 sudo[143063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:52 compute-0 python3.9[143065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957931.6012402-496-34135964142153/.source.json _original_basename=.t_4rgi8d follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:52 compute-0 sudo[143063]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:53 compute-0 ceph-mon[75179]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:53 compute-0 python3.9[143215]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:58:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:54 compute-0 sudo[143636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dftrwcykbyoxuvfnmacrwjnqyinshubp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957934.3921576-536-186981883785725/AnsiballZ_container_config_data.py'
Feb 01 14:58:54 compute-0 sudo[143636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:54 compute-0 python3.9[143638]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 01 14:58:54 compute-0 sudo[143636]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:55 compute-0 ceph-mon[75179]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:58:55 compute-0 sudo[143788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppjrohvtalmiivzkeuytmpmfjzbibkmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957935.232135-547-85824658838213/AnsiballZ_container_config_hash.py'
Feb 01 14:58:55 compute-0 sudo[143788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:55 compute-0 python3.9[143790]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 01 14:58:55 compute-0 sudo[143788]: pam_unix(sudo:session): session closed for user root
Feb 01 14:58:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:56 compute-0 sudo[143940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oszrxzfqkoposeimanmrxwumcxghsypd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957936.0752919-557-225832285234533/AnsiballZ_edpm_container_manage.py'
Feb 01 14:58:56 compute-0 sudo[143940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:58:56 compute-0 python3[143942]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 01 14:58:57 compute-0 ceph-mon[75179]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:59 compute-0 ceph-mon[75179]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:58:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:01 compute-0 ceph-mon[75179]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:01 compute-0 podman[143957]: 2026-02-01 14:59:01.178306243 +0000 UTC m=+4.346613596 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 01 14:59:01 compute-0 podman[144076]: 2026-02-01 14:59:01.276106811 +0000 UTC m=+0.040461838 container create f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 14:59:01 compute-0 podman[144076]: 2026-02-01 14:59:01.252256871 +0000 UTC m=+0.016611948 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 01 14:59:01 compute-0 python3[143942]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 01 14:59:01 compute-0 sudo[143940]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:01 compute-0 sudo[144264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wygbahvaijowxvblcnazzmdvgfhgwnwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957941.5930927-565-58915029418908/AnsiballZ_stat.py'
Feb 01 14:59:01 compute-0 sudo[144264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:02 compute-0 python3.9[144266]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:59:02 compute-0 sudo[144264]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:02 compute-0 sudo[144418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nylbshwffkatzzuishlrlkcqonimkorz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957942.3084712-574-262001887864894/AnsiballZ_file.py'
Feb 01 14:59:02 compute-0 sudo[144418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:02 compute-0 python3.9[144420]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:02 compute-0 sudo[144418]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:02 compute-0 sudo[144494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfusjpwqqeaztkhvwyuuezsvohulyklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957942.3084712-574-262001887864894/AnsiballZ_stat.py'
Feb 01 14:59:02 compute-0 sudo[144494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:03 compute-0 ceph-mon[75179]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:03 compute-0 python3.9[144496]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:59:03 compute-0 sudo[144494]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:03 compute-0 sudo[144645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhltcgwiyrjtfllenmujvipuiukbntwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957943.2359455-574-135271856173957/AnsiballZ_copy.py'
Feb 01 14:59:03 compute-0 sudo[144645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:03 compute-0 python3.9[144647]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769957943.2359455-574-135271856173957/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:03 compute-0 sudo[144645]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:04 compute-0 sudo[144721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhlpjonerrvmsebfsbfgaweyqhymbtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957943.2359455-574-135271856173957/AnsiballZ_systemd.py'
Feb 01 14:59:04 compute-0 sudo[144721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:04 compute-0 python3.9[144723]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 14:59:04 compute-0 systemd[1]: Reloading.
Feb 01 14:59:04 compute-0 systemd-rc-local-generator[144751]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:59:04 compute-0 systemd-sysv-generator[144754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:59:04 compute-0 sudo[144721]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:04 compute-0 sudo[144832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvapmgufyaoesksnvqzwrottqnkcpjae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957943.2359455-574-135271856173957/AnsiballZ_systemd.py'
Feb 01 14:59:04 compute-0 sudo[144832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:05 compute-0 ceph-mon[75179]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:05 compute-0 python3.9[144834]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:59:05 compute-0 systemd[1]: Reloading.
Feb 01 14:59:05 compute-0 systemd-rc-local-generator[144861]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:59:05 compute-0 systemd-sysv-generator[144865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:59:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:05 compute-0 systemd[1]: Starting ovn_controller container...
Feb 01 14:59:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3449622fdff9fe3522a8bb617d602fcbc9463347f45dd946280974b2873978c8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16.
Feb 01 14:59:05 compute-0 podman[144874]: 2026-02-01 14:59:05.639456417 +0000 UTC m=+0.092387477 container init f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 01 14:59:05 compute-0 ovn_controller[144890]: + sudo -E kolla_set_configs
Feb 01 14:59:05 compute-0 podman[144874]: 2026-02-01 14:59:05.665006505 +0000 UTC m=+0.117937595 container start f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 01 14:59:05 compute-0 edpm-start-podman-container[144874]: ovn_controller
Feb 01 14:59:05 compute-0 systemd[1]: Created slice User Slice of UID 0.
Feb 01 14:59:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 01 14:59:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 01 14:59:05 compute-0 edpm-start-podman-container[144873]: Creating additional drop-in dependency for "ovn_controller" (f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16)
Feb 01 14:59:05 compute-0 podman[144897]: 2026-02-01 14:59:05.72177507 +0000 UTC m=+0.053840084 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:59:05 compute-0 systemd[1]: Starting User Manager for UID 0...
Feb 01 14:59:05 compute-0 systemd[1]: f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16-40aedf39e97a9789.service: Main process exited, code=exited, status=1/FAILURE
Feb 01 14:59:05 compute-0 systemd[1]: f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16-40aedf39e97a9789.service: Failed with result 'exit-code'.
Feb 01 14:59:05 compute-0 systemd[1]: Reloading.
Feb 01 14:59:05 compute-0 systemd-sysv-generator[144964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:59:05 compute-0 systemd-rc-local-generator[144961]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:59:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:05 compute-0 systemd[1]: Started ovn_controller container.
Feb 01 14:59:05 compute-0 systemd[144931]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Feb 01 14:59:05 compute-0 sudo[144832]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:06 compute-0 systemd[144931]: Queued start job for default target Main User Target.
Feb 01 14:59:06 compute-0 systemd[144931]: Created slice User Application Slice.
Feb 01 14:59:06 compute-0 systemd[144931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 01 14:59:06 compute-0 systemd[144931]: Started Daily Cleanup of User's Temporary Directories.
Feb 01 14:59:06 compute-0 systemd[144931]: Reached target Paths.
Feb 01 14:59:06 compute-0 systemd[144931]: Reached target Timers.
Feb 01 14:59:06 compute-0 systemd[144931]: Starting D-Bus User Message Bus Socket...
Feb 01 14:59:06 compute-0 systemd[144931]: Starting Create User's Volatile Files and Directories...
Feb 01 14:59:06 compute-0 systemd[144931]: Finished Create User's Volatile Files and Directories.
Feb 01 14:59:06 compute-0 systemd[144931]: Listening on D-Bus User Message Bus Socket.
Feb 01 14:59:06 compute-0 systemd[144931]: Reached target Sockets.
Feb 01 14:59:06 compute-0 systemd[144931]: Reached target Basic System.
Feb 01 14:59:06 compute-0 systemd[144931]: Reached target Main User Target.
Feb 01 14:59:06 compute-0 systemd[144931]: Startup finished in 143ms.
Feb 01 14:59:06 compute-0 systemd[1]: Started User Manager for UID 0.
Feb 01 14:59:06 compute-0 systemd[1]: Started Session c1 of User root.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 01 14:59:06 compute-0 ovn_controller[144890]: INFO:__main__:Validating config file
Feb 01 14:59:06 compute-0 ovn_controller[144890]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 01 14:59:06 compute-0 ovn_controller[144890]: INFO:__main__:Writing out command to execute
Feb 01 14:59:06 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: ++ cat /run_command
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + ARGS=
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + sudo kolla_copy_cacerts
Feb 01 14:59:06 compute-0 systemd[1]: Started Session c2 of User root.
Feb 01 14:59:06 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + [[ ! -n '' ]]
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + . kolla_extend_start
Feb 01 14:59:06 compute-0 ovn_controller[144890]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + umask 0022
Feb 01 14:59:06 compute-0 ovn_controller[144890]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.2702] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.2711] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <warn>  [1769957946.2714] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.2723] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.2730] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.2735] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 01 14:59:06 compute-0 kernel: br-int: entered promiscuous mode
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 01 14:59:06 compute-0 ovn_controller[144890]: 2026-02-01T14:59:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.3066] manager: (ovn-492978-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 01 14:59:06 compute-0 systemd-udevd[145075]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:59:06 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Feb 01 14:59:06 compute-0 systemd-udevd[145076]: Network interface NamePolicy= disabled on kernel command line.
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.3225] device (genev_sys_6081): carrier: link connected
Feb 01 14:59:06 compute-0 NetworkManager[48987]: <info>  [1769957946.3229] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb 01 14:59:06 compute-0 python3.9[145153]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 01 14:59:07 compute-0 ceph-mon[75179]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:07 compute-0 sudo[145303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntgyfcrcwacocsqhhxxxawfgoyprhsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957947.3504937-619-268364421984899/AnsiballZ_stat.py'
Feb 01 14:59:07 compute-0 sudo[145303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:07 compute-0 python3.9[145305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:07 compute-0 sudo[145303]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:08 compute-0 sudo[145426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rapvtvmfgvalceadipxjtzhmbxwgxzyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957947.3504937-619-268364421984899/AnsiballZ_copy.py'
Feb 01 14:59:08 compute-0 sudo[145426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:08 compute-0 python3.9[145428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957947.3504937-619-268364421984899/.source.yaml _original_basename=.tijtmdip follow=False checksum=71f291fd641d85e2615dba61e77205902aaa93d5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:08 compute-0 sudo[145426]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:08 compute-0 sudo[145578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbzjdqfcqiryrcgyogrfapdcsmlgiba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957948.6246703-634-39617017939902/AnsiballZ_command.py'
Feb 01 14:59:08 compute-0 sudo[145578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:09 compute-0 python3.9[145580]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:59:09 compute-0 ovs-vsctl[145581]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 01 14:59:09 compute-0 sudo[145578]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:09 compute-0 ceph-mon[75179]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:09 compute-0 sudo[145731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkxkgkxsrzbungurbquevglzsbaziis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957949.243726-642-83516160466418/AnsiballZ_command.py'
Feb 01 14:59:09 compute-0 sudo[145731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:09 compute-0 python3.9[145733]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:59:09 compute-0 ovs-vsctl[145735]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 01 14:59:09 compute-0 sudo[145731]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.484547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950484618, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1712, "num_deletes": 252, "total_data_size": 2490423, "memory_usage": 2539640, "flush_reason": "Manual Compaction"}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950490897, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1456056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7300, "largest_seqno": 9011, "table_properties": {"data_size": 1450321, "index_size": 2618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16813, "raw_average_key_size": 21, "raw_value_size": 1436755, "raw_average_value_size": 1795, "num_data_blocks": 123, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957790, "oldest_key_time": 1769957790, "file_creation_time": 1769957950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6379 microseconds, and 2687 cpu microseconds.
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.490934) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1456056 bytes OK
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.490948) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492471) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492483) EVENT_LOG_v1 {"time_micros": 1769957950492479, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2482747, prev total WAL file size 2482747, number of live WAL files 2.
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1421KB)], [20(7515KB)]
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950493035, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9151649, "oldest_snapshot_seqno": -1}
Feb 01 14:59:10 compute-0 sudo[145886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjjimrnejybxnqtlfqvyvdurpiiwvtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957950.2482698-656-185396286188077/AnsiballZ_command.py'
Feb 01 14:59:10 compute-0 sudo[145886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3387 keys, 7114934 bytes, temperature: kUnknown
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950520005, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7114934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7088907, "index_size": 16445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 80951, "raw_average_key_size": 23, "raw_value_size": 7024341, "raw_average_value_size": 2073, "num_data_blocks": 730, "num_entries": 3387, "num_filter_entries": 3387, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769957950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.520277) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7114934 bytes
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.521697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 338.4 rd, 263.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3828, records dropped: 441 output_compression: NoCompression
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.521726) EVENT_LOG_v1 {"time_micros": 1769957950521712, "job": 6, "event": "compaction_finished", "compaction_time_micros": 27042, "compaction_time_cpu_micros": 11406, "output_level": 6, "num_output_files": 1, "total_output_size": 7114934, "num_input_records": 3828, "num_output_records": 3387, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950522027, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769957950523171, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.492894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-14:59:10.523280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 14:59:10 compute-0 python3.9[145888]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 14:59:10 compute-0 ovs-vsctl[145889]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 01 14:59:10 compute-0 sudo[145886]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:11 compute-0 sshd-session[133698]: Connection closed by 192.168.122.30 port 37556
Feb 01 14:59:11 compute-0 sshd-session[133695]: pam_unix(sshd:session): session closed for user zuul
Feb 01 14:59:11 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Feb 01 14:59:11 compute-0 systemd[1]: session-45.scope: Consumed 48.844s CPU time.
Feb 01 14:59:11 compute-0 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Feb 01 14:59:11 compute-0 systemd-logind[786]: Removed session 45.
Feb 01 14:59:11 compute-0 ceph-mon[75179]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:13 compute-0 ceph-mon[75179]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:15 compute-0 ceph-mon[75179]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:16 compute-0 sshd-session[145915]: Accepted publickey for zuul from 192.168.122.30 port 42566 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 14:59:16 compute-0 systemd-logind[786]: New session 47 of user zuul.
Feb 01 14:59:16 compute-0 systemd[1]: Started Session 47 of User zuul.
Feb 01 14:59:16 compute-0 sshd-session[145915]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 14:59:16 compute-0 systemd[1]: Stopping User Manager for UID 0...
Feb 01 14:59:16 compute-0 systemd[144931]: Activating special unit Exit the Session...
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped target Main User Target.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped target Basic System.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped target Paths.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped target Sockets.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped target Timers.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 01 14:59:16 compute-0 systemd[144931]: Closed D-Bus User Message Bus Socket.
Feb 01 14:59:16 compute-0 systemd[144931]: Stopped Create User's Volatile Files and Directories.
Feb 01 14:59:16 compute-0 systemd[144931]: Removed slice User Application Slice.
Feb 01 14:59:16 compute-0 systemd[144931]: Reached target Shutdown.
Feb 01 14:59:16 compute-0 systemd[144931]: Finished Exit the Session.
Feb 01 14:59:16 compute-0 systemd[144931]: Reached target Exit the Session.
Feb 01 14:59:16 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Feb 01 14:59:16 compute-0 systemd[1]: Stopped User Manager for UID 0.
Feb 01 14:59:16 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 01 14:59:16 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 01 14:59:16 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 01 14:59:16 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 01 14:59:16 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Feb 01 14:59:17 compute-0 python3.9[146070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:59:17 compute-0 ceph-mon[75179]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_14:59:17
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 14:59:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:18 compute-0 sudo[146224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dckwxgdnnzayyvcnxcqcyrmnpqnveyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957957.6074922-29-259524362637272/AnsiballZ_file.py'
Feb 01 14:59:18 compute-0 sudo[146224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:18 compute-0 python3.9[146226]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:18 compute-0 sudo[146224]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 14:59:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 14:59:18 compute-0 sudo[146376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznzlbjuuxhhchpucyogpzkjfuqfdqnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957958.3887863-29-235651829217650/AnsiballZ_file.py'
Feb 01 14:59:18 compute-0 sudo[146376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:18 compute-0 python3.9[146378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:18 compute-0 sudo[146376]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:19 compute-0 sudo[146528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbntlsknnphqntrjavhqnibztsfoveff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957958.977485-29-175347217969645/AnsiballZ_file.py'
Feb 01 14:59:19 compute-0 sudo[146528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:19 compute-0 ceph-mon[75179]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:20 compute-0 python3.9[146530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:20 compute-0 sudo[146528]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:20 compute-0 sudo[146691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqpjelnjfnjpwoafgxajbajxxdaqpziv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957960.153276-29-265428226802300/AnsiballZ_file.py'
Feb 01 14:59:20 compute-0 sudo[146691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:20 compute-0 ceph-mon[75179]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:20 compute-0 python3.9[146693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:20 compute-0 sudo[146691]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:20 compute-0 sudo[146843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsytfeouivhtgasecsmtgdfgvsjosrxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957960.673329-29-252824450880939/AnsiballZ_file.py'
Feb 01 14:59:20 compute-0 sudo[146843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:21 compute-0 python3.9[146845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:21 compute-0 sudo[146843]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:21 compute-0 python3.9[146995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 14:59:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:22 compute-0 sudo[147145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arskmbxticoelqoorloslzfyfvrseawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957962.1083326-73-170262088701824/AnsiballZ_seboolean.py'
Feb 01 14:59:22 compute-0 sudo[147145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:22 compute-0 python3.9[147147]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 01 14:59:22 compute-0 ceph-mon[75179]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:23 compute-0 sudo[147145]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:24 compute-0 python3.9[147297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:24 compute-0 python3.9[147419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957963.4326386-81-112061173049181/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:24 compute-0 ceph-mon[75179]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:25 compute-0 python3.9[147569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:25 compute-0 python3.9[147690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957964.948161-96-77290954161802/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:26 compute-0 sudo[147840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlnpoyyajavlqylwhfeczsjaeqqbrfxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957966.1075323-113-80907014886151/AnsiballZ_setup.py'
Feb 01 14:59:26 compute-0 sudo[147840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:26 compute-0 python3.9[147842]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 14:59:26 compute-0 sudo[147840]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:27 compute-0 ceph-mon[75179]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:27 compute-0 sudo[147924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhbhynohfxafmbtmqgdjbbkomqtzkmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957966.1075323-113-80907014886151/AnsiballZ_dnf.py'
Feb 01 14:59:27 compute-0 sudo[147924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:27 compute-0 python3.9[147926]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 14:59:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 14:59:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 14:59:28 compute-0 sudo[147928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:59:28 compute-0 sudo[147928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:28 compute-0 sudo[147928]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:28 compute-0 sudo[147953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 14:59:28 compute-0 sudo[147953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:28 compute-0 sudo[147953]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:59:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 14:59:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:59:28 compute-0 sudo[148008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:59:28 compute-0 sudo[148008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:28 compute-0 sudo[148008]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:28 compute-0 sudo[147924]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:28 compute-0 sudo[148033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 14:59:28 compute-0 sudo[148033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:29 compute-0 ceph-mon[75179]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 14:59:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.063729488 +0000 UTC m=+0.041688242 container create 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 01 14:59:29 compute-0 systemd[1]: Started libpod-conmon-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope.
Feb 01 14:59:29 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.128975363 +0000 UTC m=+0.106934217 container init 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.134251763 +0000 UTC m=+0.112210537 container start 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.043973104 +0000 UTC m=+0.021931888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:29 compute-0 focused_snyder[148160]: 167 167
Feb 01 14:59:29 compute-0 systemd[1]: libpod-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope: Deactivated successfully.
Feb 01 14:59:29 compute-0 conmon[148160]: conmon 7185b3a91ccfb212bdfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope/container/memory.events
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.139195325 +0000 UTC m=+0.117154129 container attach 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.139621727 +0000 UTC m=+0.117580521 container died 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-514f040d4e13b80901541c1fbd2ac630963e8afd6811d2615b4014e7b53e932c-merged.mount: Deactivated successfully.
Feb 01 14:59:29 compute-0 podman[148127]: 2026-02-01 14:59:29.182258376 +0000 UTC m=+0.160217130 container remove 7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:29 compute-0 systemd[1]: libpod-conmon-7185b3a91ccfb212bdfde7cc85b1274d8ef2a4c894d91fa0a9cded7f049fc664.scope: Deactivated successfully.
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.316060469 +0000 UTC m=+0.036076112 container create aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:59:29 compute-0 systemd[1]: Started libpod-conmon-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope.
Feb 01 14:59:29 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.300804983 +0000 UTC m=+0.020820636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.431861319 +0000 UTC m=+0.151877032 container init aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.441683079 +0000 UTC m=+0.161698742 container start aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.445456117 +0000 UTC m=+0.165471800 container attach aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:29 compute-0 sudo[148281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbqxuxdtvjoslzhyusoimdidkqnspct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957968.9514806-125-215753189696752/AnsiballZ_systemd.py'
Feb 01 14:59:29 compute-0 sudo[148281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:29 compute-0 nifty_galois[148201]: --> passed data devices: 0 physical, 3 LVM
Feb 01 14:59:29 compute-0 nifty_galois[148201]: --> All data devices are unavailable
Feb 01 14:59:29 compute-0 systemd[1]: libpod-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope: Deactivated successfully.
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.837252364 +0000 UTC m=+0.557268027 container died aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 14:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3478fe5d6e836fec57fcd2e64bd9398b78a1ec45c91ce7ab69106528bd3d0e06-merged.mount: Deactivated successfully.
Feb 01 14:59:29 compute-0 podman[148184]: 2026-02-01 14:59:29.886750779 +0000 UTC m=+0.606766422 container remove aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:29 compute-0 python3.9[148283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 14:59:29 compute-0 systemd[1]: libpod-conmon-aba59b3d05856a69ec24928ed130823047103a5f6a5d6881c6d98a693327609b.scope: Deactivated successfully.
Feb 01 14:59:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:29 compute-0 sudo[148033]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:29 compute-0 sudo[148281]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:29 compute-0 sudo[148313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:59:29 compute-0 sudo[148313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:29 compute-0 sudo[148313]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:30 compute-0 sudo[148341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 14:59:30 compute-0 sudo[148341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.280566273 +0000 UTC m=+0.052771169 container create f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:59:30 compute-0 systemd[1]: Started libpod-conmon-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope.
Feb 01 14:59:30 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.259331836 +0000 UTC m=+0.031536812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.365411678 +0000 UTC m=+0.137616644 container init f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.370536144 +0000 UTC m=+0.142741060 container start f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.374018444 +0000 UTC m=+0.146223380 container attach f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 14:59:30 compute-0 strange_rosalind[148491]: 167 167
Feb 01 14:59:30 compute-0 systemd[1]: libpod-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope: Deactivated successfully.
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.376363721 +0000 UTC m=+0.148568657 container died f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 14:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44242f6b7c3c7306fb356afc15fd74d7df65e77e04054904073b31d2fd4c0d1-merged.mount: Deactivated successfully.
Feb 01 14:59:30 compute-0 podman[148452]: 2026-02-01 14:59:30.420209644 +0000 UTC m=+0.192414560 container remove f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 14:59:30 compute-0 systemd[1]: libpod-conmon-f3ada13586630ac54651b4bd33215aa988ade9384d49016259e59a0829c9ce13.scope: Deactivated successfully.
Feb 01 14:59:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.552608958 +0000 UTC m=+0.040196110 container create 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 14:59:30 compute-0 systemd[1]: Started libpod-conmon-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope.
Feb 01 14:59:30 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.622702451 +0000 UTC m=+0.110289623 container init 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.627687203 +0000 UTC m=+0.115274335 container start 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.630684039 +0000 UTC m=+0.118271181 container attach 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.537121605 +0000 UTC m=+0.024708767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:30 compute-0 python3.9[148559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:30 compute-0 boring_bardeen[148582]: {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     "0": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "devices": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "/dev/loop3"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             ],
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_name": "ceph_lv0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_size": "21470642176",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "name": "ceph_lv0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "tags": {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_name": "ceph",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.crush_device_class": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.encrypted": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.objectstore": "bluestore",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_id": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.vdo": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.with_tpm": "0"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             },
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "vg_name": "ceph_vg0"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         }
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     ],
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     "1": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "devices": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "/dev/loop4"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             ],
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_name": "ceph_lv1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_size": "21470642176",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "name": "ceph_lv1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "tags": {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_name": "ceph",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.crush_device_class": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.encrypted": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.objectstore": "bluestore",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_id": "1",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.vdo": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.with_tpm": "0"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             },
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "vg_name": "ceph_vg1"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         }
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     ],
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     "2": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "devices": [
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "/dev/loop5"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             ],
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_name": "ceph_lv2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_size": "21470642176",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "name": "ceph_lv2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "tags": {
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.cluster_name": "ceph",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.crush_device_class": "",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.encrypted": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.objectstore": "bluestore",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osd_id": "2",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.vdo": "0",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:                 "ceph.with_tpm": "0"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             },
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "type": "block",
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:             "vg_name": "ceph_vg2"
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:         }
Feb 01 14:59:30 compute-0 boring_bardeen[148582]:     ]
Feb 01 14:59:30 compute-0 boring_bardeen[148582]: }
Feb 01 14:59:30 compute-0 systemd[1]: libpod-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope: Deactivated successfully.
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.903117695 +0000 UTC m=+0.390704817 container died 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-48866f3585783b57b9e1d9977b54f207fe45041288dff0a626494c1ebcaad538-merged.mount: Deactivated successfully.
Feb 01 14:59:30 compute-0 podman[148565]: 2026-02-01 14:59:30.935920432 +0000 UTC m=+0.423507554 container remove 5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bardeen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:59:30 compute-0 systemd[1]: libpod-conmon-5d39fbb331004c2f15b3035d458c4c8871b3298852072511f3b3717c1bbf5868.scope: Deactivated successfully.
Feb 01 14:59:30 compute-0 sudo[148341]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:31 compute-0 sudo[148725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 14:59:31 compute-0 sudo[148725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:31 compute-0 sudo[148725]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:31 compute-0 ceph-mon[75179]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:31 compute-0 sudo[148750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 14:59:31 compute-0 sudo[148750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:31 compute-0 python3.9[148711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957970.1504092-133-82653365186567/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.310084094 +0000 UTC m=+0.049529486 container create 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:59:31 compute-0 systemd[1]: Started libpod-conmon-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope.
Feb 01 14:59:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.284134743 +0000 UTC m=+0.023580175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.381845675 +0000 UTC m=+0.121291127 container init 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.389500514 +0000 UTC m=+0.128945906 container start 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.39358435 +0000 UTC m=+0.133029722 container attach 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:59:31 compute-0 quizzical_gould[148881]: 167 167
Feb 01 14:59:31 compute-0 systemd[1]: libpod-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope: Deactivated successfully.
Feb 01 14:59:31 compute-0 conmon[148881]: conmon 70d63f6c1a193be27daa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope/container/memory.events
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.398711877 +0000 UTC m=+0.138157239 container died 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-39d68d42f3b51597d7dcf55db2fc269cfb90c6ac72273e0c6d437e88108d8461-merged.mount: Deactivated successfully.
Feb 01 14:59:31 compute-0 podman[148839]: 2026-02-01 14:59:31.432379009 +0000 UTC m=+0.171824371 container remove 70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 14:59:31 compute-0 systemd[1]: libpod-conmon-70d63f6c1a193be27daa2fa6874621072bec0faf731aa5373e777c861b1dbdc4.scope: Deactivated successfully.
Feb 01 14:59:31 compute-0 podman[148978]: 2026-02-01 14:59:31.621764091 +0000 UTC m=+0.051982516 container create ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 14:59:31 compute-0 systemd[1]: Started libpod-conmon-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope.
Feb 01 14:59:31 compute-0 podman[148978]: 2026-02-01 14:59:31.596607182 +0000 UTC m=+0.026825657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 14:59:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 14:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 14:59:31 compute-0 podman[148978]: 2026-02-01 14:59:31.724486317 +0000 UTC m=+0.154704792 container init ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 14:59:31 compute-0 python3.9[148972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:31 compute-0 podman[148978]: 2026-02-01 14:59:31.734688459 +0000 UTC m=+0.164906864 container start ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 14:59:31 compute-0 podman[148978]: 2026-02-01 14:59:31.73822963 +0000 UTC m=+0.168448055 container attach ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 14:59:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:32 compute-0 python3.9[149130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957971.2335758-133-60088184149796/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:32 compute-0 lvm[149218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 14:59:32 compute-0 lvm[149218]: VG ceph_vg0 finished
Feb 01 14:59:32 compute-0 lvm[149219]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 14:59:32 compute-0 lvm[149219]: VG ceph_vg1 finished
Feb 01 14:59:32 compute-0 lvm[149221]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 14:59:32 compute-0 lvm[149221]: VG ceph_vg2 finished
Feb 01 14:59:32 compute-0 sad_dirac[148995]: {}
Feb 01 14:59:32 compute-0 systemd[1]: libpod-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Deactivated successfully.
Feb 01 14:59:32 compute-0 systemd[1]: libpod-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Consumed 1.062s CPU time.
Feb 01 14:59:32 compute-0 podman[148978]: 2026-02-01 14:59:32.525228381 +0000 UTC m=+0.955446816 container died ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 14:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c330be56737ea24f59d887146ea1f4442d3fb2dd483cae06c6cbfb8721c545-merged.mount: Deactivated successfully.
Feb 01 14:59:32 compute-0 podman[148978]: 2026-02-01 14:59:32.579194373 +0000 UTC m=+1.009412798 container remove ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_dirac, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 14:59:32 compute-0 systemd[1]: libpod-conmon-ebb4820bf400474b741133978c785424961832872ef7a4c65a3f89b9b8753a53.scope: Deactivated successfully.
Feb 01 14:59:32 compute-0 sudo[148750]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 14:59:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 14:59:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:32 compute-0 sudo[149238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 14:59:32 compute-0 sudo[149238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 14:59:32 compute-0 sudo[149238]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:33 compute-0 ceph-mon[75179]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 14:59:33 compute-0 python3.9[149388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:33 compute-0 python3.9[149509]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957972.9121368-177-116280454783839/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:34 compute-0 python3.9[149659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:34 compute-0 python3.9[149780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957973.9887934-177-222771691261912/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:35 compute-0 ceph-mon[75179]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:35 compute-0 python3.9[149930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 14:59:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:35 compute-0 ovn_controller[144890]: 2026-02-01T14:59:35Z|00025|memory|INFO|16896 kB peak resident set size after 29.7 seconds
Feb 01 14:59:35 compute-0 ovn_controller[144890]: 2026-02-01T14:59:35Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 01 14:59:36 compute-0 podman[150009]: 2026-02-01 14:59:36.027502917 +0000 UTC m=+0.109148760 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 01 14:59:36 compute-0 sudo[150110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynsarlfhkywpqmflbcbditeazvchdynx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957975.817689-215-92731245654179/AnsiballZ_file.py'
Feb 01 14:59:36 compute-0 sudo[150110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:36 compute-0 python3.9[150112]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:36 compute-0 sudo[150110]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:36 compute-0 sudo[150262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhzgyeeygkzwtkuxoteexzxbkxijvhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957976.5255392-223-141229432416469/AnsiballZ_stat.py'
Feb 01 14:59:36 compute-0 sudo[150262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:37 compute-0 python3.9[150264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:37 compute-0 ceph-mon[75179]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:37 compute-0 sudo[150262]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:37 compute-0 sudo[150340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hompubpqvvnvfxjpvjyrinzmtlvuksyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957976.5255392-223-141229432416469/AnsiballZ_file.py'
Feb 01 14:59:37 compute-0 sudo[150340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:37 compute-0 python3.9[150342]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:37 compute-0 sudo[150340]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:37 compute-0 sudo[150492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycdqwegkoagmbgtmfqevhvggkjislbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957977.6922917-223-5869945929307/AnsiballZ_stat.py'
Feb 01 14:59:37 compute-0 sudo[150492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:38 compute-0 python3.9[150494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:38 compute-0 sudo[150492]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:38 compute-0 sudo[150570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrnuvybeznusktwdcurfinokyiagoscy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957977.6922917-223-5869945929307/AnsiballZ_file.py'
Feb 01 14:59:38 compute-0 sudo[150570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:38 compute-0 python3.9[150572]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:38 compute-0 sudo[150570]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:39 compute-0 ceph-mon[75179]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:39 compute-0 sudo[150722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxfyyndczgihvznzsvdynrgqfxiekmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957978.8278103-246-193448198294166/AnsiballZ_file.py'
Feb 01 14:59:39 compute-0 sudo[150722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:39 compute-0 python3.9[150724]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:39 compute-0 sudo[150722]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:39 compute-0 sudo[150874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhxephhgwjnqvessknlieobjhlzrqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957979.4607484-254-191945420253547/AnsiballZ_stat.py'
Feb 01 14:59:39 compute-0 sudo[150874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:39 compute-0 python3.9[150876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:39 compute-0 sudo[150874]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:40 compute-0 sudo[150952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nibmzwpwzjfvxznlyeqdnansaxipctke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957979.4607484-254-191945420253547/AnsiballZ_file.py'
Feb 01 14:59:40 compute-0 sudo[150952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:40 compute-0 python3.9[150954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:40 compute-0 sudo[150952]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:40 compute-0 sudo[151104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isocrtzzejhgtlhktgcjoldgydjyjuaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957980.467675-266-107827880844906/AnsiballZ_stat.py'
Feb 01 14:59:40 compute-0 sudo[151104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:40 compute-0 python3.9[151106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:40 compute-0 sudo[151104]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:41 compute-0 ceph-mon[75179]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:41 compute-0 sudo[151182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyvvbjimpfsoddwyvnyptdcjuwtblqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957980.467675-266-107827880844906/AnsiballZ_file.py'
Feb 01 14:59:41 compute-0 sudo[151182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:41 compute-0 python3.9[151184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:41 compute-0 sudo[151182]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:41 compute-0 sudo[151334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckahyiedwvuodtxgpchonrdhkvqnoipo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957981.4931889-278-279956113746897/AnsiballZ_systemd.py'
Feb 01 14:59:41 compute-0 sudo[151334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:41 compute-0 python3.9[151336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:59:41 compute-0 systemd[1]: Reloading.
Feb 01 14:59:42 compute-0 systemd-rc-local-generator[151365]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:59:42 compute-0 systemd-sysv-generator[151368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:59:42 compute-0 sudo[151334]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:42 compute-0 sudo[151524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrlmdplvtqjiexdrngmclzksikqgalgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957982.394851-286-24385448186225/AnsiballZ_stat.py'
Feb 01 14:59:42 compute-0 sudo[151524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:42 compute-0 python3.9[151526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:42 compute-0 sudo[151524]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:43 compute-0 sudo[151602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mugxjtspbutzhzddpbzgieuzvcxhemmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957982.394851-286-24385448186225/AnsiballZ_file.py'
Feb 01 14:59:43 compute-0 sudo[151602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:43 compute-0 ceph-mon[75179]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:43 compute-0 python3.9[151604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:43 compute-0 sudo[151602]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:43 compute-0 sudo[151754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrfdfcqzjzdihgkklolokxcdmpjxmwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957983.3738053-298-256761874333453/AnsiballZ_stat.py'
Feb 01 14:59:43 compute-0 sudo[151754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:43 compute-0 python3.9[151756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:43 compute-0 sudo[151754]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:44 compute-0 sudo[151832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waxtyurhmfedqtnlpwsokuueutthgaav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957983.3738053-298-256761874333453/AnsiballZ_file.py'
Feb 01 14:59:44 compute-0 sudo[151832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:44 compute-0 python3.9[151834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:44 compute-0 sudo[151832]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:44 compute-0 sudo[151984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgsspvrklblehvsxmmwcbrempyjlreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957984.5039685-310-211504379673642/AnsiballZ_systemd.py'
Feb 01 14:59:44 compute-0 sudo[151984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:45 compute-0 ceph-mon[75179]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:45 compute-0 python3.9[151986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 14:59:45 compute-0 systemd[1]: Reloading.
Feb 01 14:59:45 compute-0 systemd-sysv-generator[152017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 14:59:45 compute-0 systemd-rc-local-generator[152012]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 14:59:45 compute-0 systemd[1]: Starting Create netns directory...
Feb 01 14:59:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 01 14:59:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 01 14:59:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:45 compute-0 systemd[1]: Finished Create netns directory.
Feb 01 14:59:45 compute-0 sudo[151984]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:46 compute-0 sudo[152178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffqoerdftkptuaejwipmeekiplhelju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957985.7789168-320-90290012811085/AnsiballZ_file.py'
Feb 01 14:59:46 compute-0 sudo[152178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:46 compute-0 python3.9[152180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:46 compute-0 sudo[152178]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:46 compute-0 sudo[152330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kncmsgjvezqegvrsnwuvvsawczsrtsrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957986.5606518-328-249842516700950/AnsiballZ_stat.py'
Feb 01 14:59:46 compute-0 sudo[152330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:47 compute-0 python3.9[152332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:47 compute-0 sudo[152330]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:47 compute-0 ceph-mon[75179]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:47 compute-0 sudo[152453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tabbjlfzceydnwoonudmmomuyfqhmtfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957986.5606518-328-249842516700950/AnsiballZ_copy.py'
Feb 01 14:59:47 compute-0 sudo[152453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:47 compute-0 python3.9[152455]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769957986.5606518-328-249842516700950/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:47 compute-0 sudo[152453]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:48 compute-0 sudo[152605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwlwlopjhaardvpzxovanlbcxqwhofp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957988.0821548-345-199382477815797/AnsiballZ_file.py'
Feb 01 14:59:48 compute-0 sudo[152605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 14:59:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 14:59:48 compute-0 python3.9[152607]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:48 compute-0 sudo[152605]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:49 compute-0 sudo[152757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhejqgusmhiyvqlkdikbgkmuosxdyosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957988.8072348-353-120811068518257/AnsiballZ_file.py'
Feb 01 14:59:49 compute-0 sudo[152757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:49 compute-0 ceph-mon[75179]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:49 compute-0 python3.9[152759]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 14:59:49 compute-0 sudo[152757]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:49 compute-0 sudo[152909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggvdqyvbmopclhbdmcwbiuxsrhidzmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957989.555074-361-206525969494613/AnsiballZ_stat.py'
Feb 01 14:59:49 compute-0 sudo[152909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:50 compute-0 python3.9[152911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 14:59:50 compute-0 sudo[152909]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:50 compute-0 sudo[153032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juknavwzymlzuibxywfguorsnsugduxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957989.555074-361-206525969494613/AnsiballZ_copy.py'
Feb 01 14:59:50 compute-0 sudo[153032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:50 compute-0 python3.9[153034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769957989.555074-361-206525969494613/.source.json _original_basename=.yhu2ina2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:50 compute-0 sudo[153032]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:51 compute-0 ceph-mon[75179]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:51 compute-0 python3.9[153184]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 14:59:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:53 compute-0 sudo[153605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptbhgffdwjrpttyijieswartxelpgguv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957992.7116349-401-248519691875752/AnsiballZ_container_config_data.py'
Feb 01 14:59:53 compute-0 sudo[153605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:53 compute-0 ceph-mon[75179]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:53 compute-0 python3.9[153607]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 01 14:59:53 compute-0 sudo[153605]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:54 compute-0 sudo[153757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onnfzmyaxzfqyjpmgazryglfpsrgkvhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769957993.731143-412-51257179033710/AnsiballZ_container_config_hash.py'
Feb 01 14:59:54 compute-0 sudo[153757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:54 compute-0 python3.9[153759]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 01 14:59:54 compute-0 sudo[153757]: pam_unix(sudo:session): session closed for user root
Feb 01 14:59:55 compute-0 ceph-mon[75179]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:55 compute-0 sudo[153909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpupyazepgzpyrlosjivqkjijevosqic ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769957994.7589798-422-279060638947718/AnsiballZ_edpm_container_manage.py'
Feb 01 14:59:55 compute-0 sudo[153909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 14:59:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 14:59:55 compute-0 python3[153911]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 01 14:59:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:57 compute-0 ceph-mon[75179]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:59 compute-0 ceph-mon[75179]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 14:59:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:00:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2097 writes, 9242 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2097 writes, 9242 keys, 2097 commit groups, 1.0 writes per commit group, ingest: 12.29 MB, 0.02 MB/s
                                           Interval WAL: 2097 writes, 2097 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    160.3      0.05              0.02         3    0.018       0      0       0.0       0.0
                                             L6      1/0    6.79 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    236.7    207.5      0.07              0.03         2    0.034    7145    730       0.0       0.0
                                            Sum      1/0    6.79 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    131.2    186.5      0.12              0.06         5    0.025    7145    730       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    137.0    194.2      0.12              0.06         4    0.029    7145    730       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    236.7    207.5      0.07              0.03         2    0.034    7145    730       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    176.1      0.05              0.02         2    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 308.00 MB usage: 636.55 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(38,545.27 KB,0.172885%) FilterBlock(6,27.86 KB,0.00883325%) IndexBlock(6,63.42 KB,0.0201089%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 01 15:00:00 compute-0 ceph-mon[75179]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:01 compute-0 podman[153925]: 2026-02-01 15:00:01.666804817 +0000 UTC m=+6.053836018 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 01 15:00:01 compute-0 podman[154075]: 2026-02-01 15:00:01.792722575 +0000 UTC m=+0.053792868 container create 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 01 15:00:01 compute-0 podman[154075]: 2026-02-01 15:00:01.768798481 +0000 UTC m=+0.029868744 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 01 15:00:01 compute-0 python3[153911]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 01 15:00:01 compute-0 sudo[153909]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:02 compute-0 sudo[154263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oifgayjdmeopchrpdbhnvcdynyrbvzqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958002.085446-430-137275481576982/AnsiballZ_stat.py'
Feb 01 15:00:02 compute-0 sudo[154263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:02 compute-0 python3.9[154265]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:00:02 compute-0 sudo[154263]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:02 compute-0 ceph-mon[75179]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:03 compute-0 sudo[154417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhhwpnsctnqvqdhbgbtydypoyomagequ ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958002.7611372-439-84635257712671/AnsiballZ_file.py'
Feb 01 15:00:03 compute-0 sudo[154417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:03 compute-0 python3.9[154419]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:03 compute-0 sudo[154417]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:03 compute-0 sudo[154493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrycrcxhnaiqichikeqhtypbsrnotgsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958002.7611372-439-84635257712671/AnsiballZ_stat.py'
Feb 01 15:00:03 compute-0 sudo[154493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:03 compute-0 python3.9[154495]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:00:03 compute-0 sudo[154493]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:04 compute-0 sudo[154644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oamflsggoocgrczeopnznewiwjvhipco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958003.6657736-439-78844630350748/AnsiballZ_copy.py'
Feb 01 15:00:04 compute-0 sudo[154644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:04 compute-0 python3.9[154646]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769958003.6657736-439-78844630350748/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:04 compute-0 sudo[154644]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:04 compute-0 sudo[154720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaaxrtjiozxlotpoklmpjiouisqgozzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958003.6657736-439-78844630350748/AnsiballZ_systemd.py'
Feb 01 15:00:04 compute-0 sudo[154720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:04 compute-0 python3.9[154722]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:00:04 compute-0 systemd[1]: Reloading.
Feb 01 15:00:04 compute-0 systemd-sysv-generator[154751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:00:04 compute-0 systemd-rc-local-generator[154746]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:00:04 compute-0 ceph-mon[75179]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:05 compute-0 sudo[154720]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:05 compute-0 sudo[154830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biljpvauenphdzdtqrkxibhzctyeokip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958003.6657736-439-78844630350748/AnsiballZ_systemd.py'
Feb 01 15:00:05 compute-0 sudo[154830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:05 compute-0 python3.9[154832]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:05 compute-0 systemd[1]: Reloading.
Feb 01 15:00:05 compute-0 systemd-sysv-generator[154862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:00:05 compute-0 systemd-rc-local-generator[154858]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:00:05 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Feb 01 15:00:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6a638621d1807aa58f3c5aaf543bfcc60f34f23a3c0997ac8a2414e38b0938/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6a638621d1807aa58f3c5aaf543bfcc60f34f23a3c0997ac8a2414e38b0938/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815.
Feb 01 15:00:06 compute-0 podman[154874]: 2026-02-01 15:00:06.055654442 +0000 UTC m=+0.138240082 container init 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + sudo -E kolla_set_configs
Feb 01 15:00:06 compute-0 podman[154874]: 2026-02-01 15:00:06.081560182 +0000 UTC m=+0.164145832 container start 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 01 15:00:06 compute-0 edpm-start-podman-container[154874]: ovn_metadata_agent
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Validating config file
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Copying service configuration files
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Writing out command to execute
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: ++ cat /run_command
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + CMD=neutron-ovn-metadata-agent
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + ARGS=
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + sudo kolla_copy_cacerts
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + [[ ! -n '' ]]
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + . kolla_extend_start
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: Running command: 'neutron-ovn-metadata-agent'
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + umask 0022
Feb 01 15:00:06 compute-0 ovn_metadata_agent[154890]: + exec neutron-ovn-metadata-agent
Feb 01 15:00:06 compute-0 edpm-start-podman-container[154873]: Creating additional drop-in dependency for "ovn_metadata_agent" (1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815)
Feb 01 15:00:06 compute-0 podman[154894]: 2026-02-01 15:00:06.177389731 +0000 UTC m=+0.120763312 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 01 15:00:06 compute-0 podman[154906]: 2026-02-01 15:00:06.177270657 +0000 UTC m=+0.088719976 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 01 15:00:06 compute-0 systemd[1]: Reloading.
Feb 01 15:00:06 compute-0 systemd-sysv-generator[154994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:00:06 compute-0 systemd-rc-local-generator[154989]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:00:06 compute-0 systemd[1]: Started ovn_metadata_agent container.
Feb 01 15:00:06 compute-0 sudo[154830]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:07 compute-0 ceph-mon[75179]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.011024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007011053, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 686, "num_deletes": 251, "total_data_size": 854934, "memory_usage": 866936, "flush_reason": "Manual Compaction"}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007016259, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 847505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9012, "largest_seqno": 9697, "table_properties": {"data_size": 843899, "index_size": 1450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7816, "raw_average_key_size": 18, "raw_value_size": 836709, "raw_average_value_size": 1982, "num_data_blocks": 67, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957951, "oldest_key_time": 1769957951, "file_creation_time": 1769958007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5259 microseconds, and 1697 cpu microseconds.
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.016285) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 847505 bytes OK
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.016315) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017338) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017351) EVENT_LOG_v1 {"time_micros": 1769958007017347, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 851345, prev total WAL file size 851345, number of live WAL files 2.
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017595) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(827KB)], [23(6948KB)]
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007017659, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7962439, "oldest_snapshot_seqno": -1}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3295 keys, 6147633 bytes, temperature: kUnknown
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007040538, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6147633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6123704, "index_size": 14604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79842, "raw_average_key_size": 24, "raw_value_size": 6062162, "raw_average_value_size": 1839, "num_data_blocks": 638, "num_entries": 3295, "num_filter_entries": 3295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.040705) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6147633 bytes
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.041891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 347.2 rd, 268.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.8 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(16.6) write-amplify(7.3) OK, records in: 3809, records dropped: 514 output_compression: NoCompression
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.041907) EVENT_LOG_v1 {"time_micros": 1769958007041900, "job": 8, "event": "compaction_finished", "compaction_time_micros": 22935, "compaction_time_cpu_micros": 9087, "output_level": 6, "num_output_files": 1, "total_output_size": 6147633, "num_input_records": 3809, "num_output_records": 3295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007042110, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958007042647, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.017558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:00:07.042730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:00:07 compute-0 python3.9[155150]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 INFO neutron.common.config [-] Logging enabled!
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.759 154901 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.760 154901 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.761 154901 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.762 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.763 154901 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.764 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.765 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.766 154901 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.767 154901 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.768 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.769 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.770 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.771 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.772 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.773 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.774 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.775 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.776 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.777 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.778 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.779 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.780 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.781 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.782 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.783 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.784 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.785 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.786 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.787 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.788 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.789 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.790 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.791 154901 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.791 154901 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.842 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.854 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c3bd6005-873a-4620-bb39-624ed33e90e2 (UUID: c3bd6005-873a-4620-bb39-624ed33e90e2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.884 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.885 154901 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.887 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.893 154901 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.899 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c3bd6005-873a-4620-bb39-624ed33e90e2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb6bbf84820>], external_ids={}, name=c3bd6005-873a-4620-bb39-624ed33e90e2, nb_cfg_timestamp=1769957954302, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.900 154901 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb6bbf84fd0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.901 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.901 154901 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.902 154901 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.902 154901 INFO oslo_service.service [-] Starting 1 workers
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.905 154901 DEBUG oslo_service.service [-] Started child 155182 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.908 155182 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-8290261'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.908 154901 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp6yvx35yo/privsep.sock']
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.926 155182 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.927 155182 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.927 155182 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 01 15:00:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.930 155182 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.935 155182 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 01 15:00:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:07.941 155182 INFO eventlet.wsgi.server [-] (155182) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 01 15:00:08 compute-0 sudo[155312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlnnjgqbwopfnakcjlouaaysanbockh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958007.929342-484-220338939120569/AnsiballZ_stat.py'
Feb 01 15:00:08 compute-0 sudo[155312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:08 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.499 154901 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.500 154901 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6yvx35yo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.414 155315 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.419 155315 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.422 155315 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.423 155315 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155315
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.503 155315 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffbecc0-9c75-4272-8029-3823b1d72e8a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.908 155315 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.908 155315 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:00:08 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:08.909 155315 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:00:09 compute-0 python3.9[155314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:00:09 compute-0 sudo[155312]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:09 compute-0 ceph-mon[75179]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.359 155315 DEBUG oslo.privsep.daemon [-] privsep: reply[d7eb1d28-7468-44a1-9cb1-4813a8fde834]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.362 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, column=external_ids, values=({'neutron:ovn-metadata-id': 'a7cfbf75-618c-52b8-b548-605f3c91bcbe'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.371 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.377 154901 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.378 154901 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.379 154901 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.380 154901 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.381 154901 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.382 154901 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.383 154901 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.384 154901 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.385 154901 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.386 154901 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.387 154901 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.388 154901 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.389 154901 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.390 154901 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.391 154901 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.392 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.393 154901 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.394 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.395 154901 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.396 154901 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.397 154901 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.398 154901 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.399 154901 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.400 154901 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.401 154901 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.402 154901 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.403 154901 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.404 154901 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.405 154901 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.406 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.407 154901 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.408 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.409 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.410 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.411 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:00:09 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:00:09.412 154901 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 01 15:00:09 compute-0 sudo[155442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnodzqjgqsumfnwbmivsaxrrobqfcsvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958007.929342-484-220338939120569/AnsiballZ_copy.py'
Feb 01 15:00:09 compute-0 sudo[155442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:09 compute-0 python3.9[155444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958007.929342-484-220338939120569/.source.yaml _original_basename=.43efzx1r follow=False checksum=85d5f776cfd8fbfbcc86699b9b1dc89afe8e4b0a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:09 compute-0 sudo[155442]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:09 compute-0 sshd-session[145918]: Connection closed by 192.168.122.30 port 42566
Feb 01 15:00:09 compute-0 sshd-session[145915]: pam_unix(sshd:session): session closed for user zuul
Feb 01 15:00:09 compute-0 systemd-logind[786]: Session 47 logged out. Waiting for processes to exit.
Feb 01 15:00:09 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Feb 01 15:00:09 compute-0 systemd[1]: session-47.scope: Consumed 47.335s CPU time.
Feb 01 15:00:10 compute-0 systemd-logind[786]: Removed session 47.
Feb 01 15:00:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:11 compute-0 ceph-mon[75179]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:12 compute-0 ceph-mon[75179]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:14 compute-0 ceph-mon[75179]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:15 compute-0 sshd-session[155469]: Accepted publickey for zuul from 192.168.122.30 port 53876 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 15:00:15 compute-0 systemd-logind[786]: New session 48 of user zuul.
Feb 01 15:00:15 compute-0 systemd[1]: Started Session 48 of User zuul.
Feb 01 15:00:15 compute-0 sshd-session[155469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 15:00:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:16 compute-0 python3.9[155622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 15:00:16 compute-0 sudo[155776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fipcuucdquguzbnkqyazdvnxhkjxlwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958016.614849-29-242950489662935/AnsiballZ_command.py'
Feb 01 15:00:16 compute-0 sudo[155776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:17 compute-0 ceph-mon[75179]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:17 compute-0 python3.9[155778]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:17 compute-0 sudo[155776]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:00:17
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'vms', 'default.rgw.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:00:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:17 compute-0 sudo[155941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quuqdfftlrlccmyonzwzfpadgujrbsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958017.442755-40-97325802209341/AnsiballZ_systemd_service.py'
Feb 01 15:00:17 compute-0 sudo[155941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:18 compute-0 python3.9[155943]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:00:18 compute-0 systemd[1]: Reloading.
Feb 01 15:00:18 compute-0 systemd-rc-local-generator[155968]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:00:18 compute-0 systemd-sysv-generator[155973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:00:18 compute-0 sudo[155941]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:00:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:00:19 compute-0 ceph-mon[75179]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:19 compute-0 python3.9[156128]: ansible-ansible.builtin.service_facts Invoked
Feb 01 15:00:19 compute-0 network[156145]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 15:00:19 compute-0 network[156146]: 'network-scripts' will be removed from distribution in near future.
Feb 01 15:00:19 compute-0 network[156147]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 15:00:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:21 compute-0 ceph-mon[75179]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:23 compute-0 ceph-mon[75179]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:23 compute-0 sudo[156407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qedjxklsalxwzmaagmztpkqzpnrchlsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958023.113505-59-181759141479656/AnsiballZ_systemd_service.py'
Feb 01 15:00:23 compute-0 sudo[156407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:23 compute-0 python3.9[156409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:23 compute-0 sudo[156407]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:24 compute-0 sudo[156560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atudiqilzttmomgzsbtuzuiwrzvplwsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958023.8992946-59-190922188466771/AnsiballZ_systemd_service.py'
Feb 01 15:00:24 compute-0 sudo[156560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:24 compute-0 python3.9[156562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:24 compute-0 sudo[156560]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:24 compute-0 sudo[156713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzgjpjyjsbckjmzkousuejuejwbxpaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958024.6658177-59-17970966802036/AnsiballZ_systemd_service.py'
Feb 01 15:00:24 compute-0 sudo[156713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:25 compute-0 ceph-mon[75179]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:25 compute-0 python3.9[156715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:26 compute-0 sudo[156713]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:26 compute-0 sudo[156866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osnqpaqxwfijtqhdwcoobfftubkzpvtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958026.404567-59-154051189555828/AnsiballZ_systemd_service.py'
Feb 01 15:00:26 compute-0 sudo[156866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:26 compute-0 python3.9[156868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:27 compute-0 sudo[156866]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:27 compute-0 ceph-mon[75179]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:27 compute-0 sudo[157019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwnmbokkgnikuslwgnklzsjqwsexosg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958027.360037-59-37455772598224/AnsiballZ_systemd_service.py'
Feb 01 15:00:27 compute-0 sudo[157019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:27 compute-0 python3.9[157021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:27 compute-0 sudo[157019]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:00:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:00:28 compute-0 sudo[157172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofpeewkbiiaytiuwrugneiqtkqdruxjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958028.1008315-59-92576369217422/AnsiballZ_systemd_service.py'
Feb 01 15:00:28 compute-0 sudo[157172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:28 compute-0 python3.9[157174]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:28 compute-0 sudo[157172]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:29 compute-0 ceph-mon[75179]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:29 compute-0 sudo[157325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhouxqcamlzrduylypapmnrduotjnjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958028.875585-59-224043887047663/AnsiballZ_systemd_service.py'
Feb 01 15:00:29 compute-0 sudo[157325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:29 compute-0 python3.9[157327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:00:29 compute-0 sudo[157325]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:30 compute-0 sudo[157478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnslptimxkjueysledymxnyxqltsvibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958029.901129-111-181600050617639/AnsiballZ_file.py'
Feb 01 15:00:30 compute-0 sudo[157478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:30 compute-0 python3.9[157480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:30 compute-0 sudo[157478]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:30 compute-0 sudo[157630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzlnwbnvhofazurvgdbjgrtocmdwgydt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958030.657756-111-64555684264841/AnsiballZ_file.py'
Feb 01 15:00:30 compute-0 sudo[157630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:31 compute-0 python3.9[157632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:31 compute-0 sudo[157630]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:31 compute-0 ceph-mon[75179]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:31 compute-0 sudo[157782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbdaohvqasmulanyhccwwogbrjogsbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958031.3026206-111-8458613389947/AnsiballZ_file.py'
Feb 01 15:00:31 compute-0 sudo[157782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:31 compute-0 python3.9[157784]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:31 compute-0 sudo[157782]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:32 compute-0 sudo[157934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uehfzhjlrmivudahlkzgyoylzydfmkhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958031.8736646-111-202043740593507/AnsiballZ_file.py'
Feb 01 15:00:32 compute-0 sudo[157934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:32 compute-0 python3.9[157936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:32 compute-0 sudo[157934]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:32 compute-0 sudo[158060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:00:32 compute-0 sudo[158060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:32 compute-0 sudo[158060]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:32 compute-0 sudo[158112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxtefzozqmsykxwjodxihljfycaxzcvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958032.553808-111-260430933362847/AnsiballZ_file.py'
Feb 01 15:00:32 compute-0 sudo[158112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:32 compute-0 sudo[158111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 01 15:00:32 compute-0 sudo[158111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:32 compute-0 python3.9[158133]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:32 compute-0 sudo[158112]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 sudo[158111]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:33 compute-0 sudo[158207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:00:33 compute-0 sudo[158207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:33 compute-0 sudo[158207]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 ceph-mon[75179]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:33 compute-0 sudo[158261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:00:33 compute-0 sudo[158261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:33 compute-0 sudo[158359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cssoybiubfkeducwhzrqdxvqwakngrdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958033.1134439-111-258348053802195/AnsiballZ_file.py'
Feb 01 15:00:33 compute-0 sudo[158359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:33 compute-0 python3.9[158361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:33 compute-0 sudo[158359]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 sudo[158261]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:00:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:00:33 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:00:33 compute-0 sudo[158440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:00:33 compute-0 sudo[158440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:33 compute-0 sudo[158440]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:33 compute-0 sudo[158494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:00:33 compute-0 sudo[158494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:33 compute-0 sudo[158592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxnbdzrpymbtacyudgzdupebqoljhyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958033.6455212-111-262606154697720/AnsiballZ_file.py'
Feb 01 15:00:33 compute-0 sudo[158592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.039624067 +0000 UTC m=+0.054467301 container create 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:00:34 compute-0 python3.9[158594]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:34 compute-0 systemd[1]: Started libpod-conmon-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope.
Feb 01 15:00:34 compute-0 sudo[158592]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:34 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.014769349 +0000 UTC m=+0.029612593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.118902592 +0000 UTC m=+0.133745836 container init 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.125521778 +0000 UTC m=+0.140365012 container start 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.131810195 +0000 UTC m=+0.146653399 container attach 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:00:34 compute-0 xenodochial_nash[158625]: 167 167
Feb 01 15:00:34 compute-0 systemd[1]: libpod-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope: Deactivated successfully.
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.141470656 +0000 UTC m=+0.156313850 container died 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f910c801279fa53bf3013b5843ac74f5d5d739b9bf86e77d354aa8ba8e19ad3-merged.mount: Deactivated successfully.
Feb 01 15:00:34 compute-0 podman[158608]: 2026-02-01 15:00:34.184838064 +0000 UTC m=+0.199681268 container remove 116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nash, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:00:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:00:34 compute-0 systemd[1]: libpod-conmon-116f9cfcb6fb56c350e02b9f7a385160299c66ff7f0743ef35dcb9d382ceee8b.scope: Deactivated successfully.
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.353737406 +0000 UTC m=+0.045890550 container create c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 15:00:34 compute-0 systemd[1]: Started libpod-conmon-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope.
Feb 01 15:00:34 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.334516926 +0000 UTC m=+0.026670050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.444553885 +0000 UTC m=+0.136706999 container init c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.452511059 +0000 UTC m=+0.144664163 container start c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.457156949 +0000 UTC m=+0.149310053 container attach c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 15:00:34 compute-0 sudo[158817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skevjxkqvcaabvrphcasspojugqbhnqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958034.2658296-161-224962532288083/AnsiballZ_file.py'
Feb 01 15:00:34 compute-0 sudo[158817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:34 compute-0 python3.9[158819]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:34 compute-0 sudo[158817]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:34 compute-0 relaxed_jemison[158762]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:00:34 compute-0 relaxed_jemison[158762]: --> All data devices are unavailable
Feb 01 15:00:34 compute-0 systemd[1]: libpod-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope: Deactivated successfully.
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.954001108 +0000 UTC m=+0.646154222 container died c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2420a2902fb82e5215d3fc0505e8c3a27297ed2f04ec9cb6f439afb2f888448b-merged.mount: Deactivated successfully.
Feb 01 15:00:34 compute-0 podman[158706]: 2026-02-01 15:00:34.997700995 +0000 UTC m=+0.689854109 container remove c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 15:00:35 compute-0 systemd[1]: libpod-conmon-c08708bc132cfa55ea38d39468c2766beb086a2a1f9c1e67de214ff317d8032e.scope: Deactivated successfully.
Feb 01 15:00:35 compute-0 sudo[158494]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:35 compute-0 sudo[158969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:00:35 compute-0 sudo[158969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:35 compute-0 sudo[158969]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:35 compute-0 sudo[159019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlhvxaugcivfpseleaqkvfgqncgbhjvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958034.8742292-161-149300679338036/AnsiballZ_file.py'
Feb 01 15:00:35 compute-0 sudo[159019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:35 compute-0 sudo[159022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:00:35 compute-0 sudo[159022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:35 compute-0 ceph-mon[75179]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:35 compute-0 python3.9[159025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:35 compute-0 sudo[159019]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.373717172 +0000 UTC m=+0.039809348 container create 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:00:35 compute-0 systemd[1]: Started libpod-conmon-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope.
Feb 01 15:00:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.427545094 +0000 UTC m=+0.093637340 container init 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.431997409 +0000 UTC m=+0.098089565 container start 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 15:00:35 compute-0 vibrant_hopper[159124]: 167 167
Feb 01 15:00:35 compute-0 systemd[1]: libpod-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope: Deactivated successfully.
Feb 01 15:00:35 compute-0 conmon[159124]: conmon 9c2f7ffa0892e83ac07b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope/container/memory.events
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.43525596 +0000 UTC m=+0.101348206 container attach 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.435628881 +0000 UTC m=+0.101721077 container died 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.353274128 +0000 UTC m=+0.019366364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-24922403da10291fd441336338c607e23e2319b4062d3cbc9f91338bc22f3430-merged.mount: Deactivated successfully.
Feb 01 15:00:35 compute-0 podman[159068]: 2026-02-01 15:00:35.471188219 +0000 UTC m=+0.137280405 container remove 9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:00:35 compute-0 systemd[1]: libpod-conmon-9c2f7ffa0892e83ac07b506ab917b7ed4e6bc3c8c10df83cc2f8dc6a192edd6a.scope: Deactivated successfully.
Feb 01 15:00:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.599957184 +0000 UTC m=+0.039017836 container create 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 01 15:00:35 compute-0 systemd[1]: Started libpod-conmon-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope.
Feb 01 15:00:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.681278726 +0000 UTC m=+0.120339418 container init 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.586014833 +0000 UTC m=+0.025075495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.686240456 +0000 UTC m=+0.125301108 container start 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:00:35 compute-0 sudo[159269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfoyoeckytnqdinujaswvrtodujcwqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958035.410726-161-131956976570747/AnsiballZ_file.py'
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.689346073 +0000 UTC m=+0.128406735 container attach 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:00:35 compute-0 sudo[159269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:35 compute-0 python3.9[159273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:35 compute-0 sudo[159269]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:35 compute-0 amazing_cori[159240]: {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     "0": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "devices": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "/dev/loop3"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             ],
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_name": "ceph_lv0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_size": "21470642176",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "name": "ceph_lv0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "tags": {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_name": "ceph",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.crush_device_class": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.encrypted": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.objectstore": "bluestore",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_id": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.vdo": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.with_tpm": "0"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             },
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "vg_name": "ceph_vg0"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         }
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     ],
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     "1": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "devices": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "/dev/loop4"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             ],
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_name": "ceph_lv1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_size": "21470642176",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "name": "ceph_lv1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "tags": {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_name": "ceph",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.crush_device_class": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.encrypted": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.objectstore": "bluestore",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_id": "1",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.vdo": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.with_tpm": "0"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             },
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "vg_name": "ceph_vg1"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         }
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     ],
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     "2": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "devices": [
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "/dev/loop5"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             ],
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_name": "ceph_lv2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_size": "21470642176",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "name": "ceph_lv2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "tags": {
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.cluster_name": "ceph",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.crush_device_class": "",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.encrypted": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.objectstore": "bluestore",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osd_id": "2",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.vdo": "0",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:                 "ceph.with_tpm": "0"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             },
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "type": "block",
Feb 01 15:00:35 compute-0 amazing_cori[159240]:             "vg_name": "ceph_vg2"
Feb 01 15:00:35 compute-0 amazing_cori[159240]:         }
Feb 01 15:00:35 compute-0 amazing_cori[159240]:     ]
Feb 01 15:00:35 compute-0 amazing_cori[159240]: }
Feb 01 15:00:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:35 compute-0 systemd[1]: libpod-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope: Deactivated successfully.
Feb 01 15:00:35 compute-0 podman[159200]: 2026-02-01 15:00:35.952046729 +0000 UTC m=+0.391107411 container died 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6e9d569f25c8dd265cd84844812fbd349e299b529af9dc43769c6f7fba370b6-merged.mount: Deactivated successfully.
Feb 01 15:00:36 compute-0 podman[159200]: 2026-02-01 15:00:36.00197792 +0000 UTC m=+0.441038602 container remove 49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 15:00:36 compute-0 systemd[1]: libpod-conmon-49054cfec002e8be9377a6e5b1759351eeefd20c54180c1554cb54a49f09beca.scope: Deactivated successfully.
Feb 01 15:00:36 compute-0 sudo[159022]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:36 compute-0 sudo[159366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:00:36 compute-0 sudo[159366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:36 compute-0 sudo[159366]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:36 compute-0 sudo[159414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:00:36 compute-0 sudo[159414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:36 compute-0 sudo[159509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxoioyykvrbycysvvxucuusaqxutvpne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958036.004367-161-199391639165596/AnsiballZ_file.py'
Feb 01 15:00:36 compute-0 sudo[159509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:36 compute-0 podman[159463]: 2026-02-01 15:00:36.344152057 +0000 UTC m=+0.073665949 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 01 15:00:36 compute-0 podman[159464]: 2026-02-01 15:00:36.385799816 +0000 UTC m=+0.114900706 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.452150679 +0000 UTC m=+0.038308486 container create 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:00:36 compute-0 systemd[1]: Started libpod-conmon-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope.
Feb 01 15:00:36 compute-0 python3.9[159520]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:36 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:36 compute-0 sudo[159509]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.519420338 +0000 UTC m=+0.105578145 container init 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.525261222 +0000 UTC m=+0.111419039 container start 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:00:36 compute-0 mystifying_blackwell[159563]: 167 167
Feb 01 15:00:36 compute-0 systemd[1]: libpod-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope: Deactivated successfully.
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.529741518 +0000 UTC m=+0.115899345 container attach 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.530503669 +0000 UTC m=+0.116661476 container died 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.436260693 +0000 UTC m=+0.022418520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b3dfdb02bfc409994dba55f5ab5bf0a5ac0fec53ab646260ff2c851b882fd3-merged.mount: Deactivated successfully.
Feb 01 15:00:36 compute-0 podman[159546]: 2026-02-01 15:00:36.562180538 +0000 UTC m=+0.148338365 container remove 5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackwell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 15:00:36 compute-0 systemd[1]: libpod-conmon-5594313f80fb6f04215bae99bbfe3c256508dbc3bfb50c5b0fc1c0380afdbadb.scope: Deactivated successfully.
Feb 01 15:00:36 compute-0 podman[159639]: 2026-02-01 15:00:36.700487772 +0000 UTC m=+0.050461128 container create d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:00:36 compute-0 systemd[1]: Started libpod-conmon-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope.
Feb 01 15:00:36 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:00:36 compute-0 podman[159639]: 2026-02-01 15:00:36.680974874 +0000 UTC m=+0.030948220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:00:36 compute-0 podman[159639]: 2026-02-01 15:00:36.789693666 +0000 UTC m=+0.139667012 container init d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:00:36 compute-0 podman[159639]: 2026-02-01 15:00:36.796684152 +0000 UTC m=+0.146657478 container start d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:00:36 compute-0 podman[159639]: 2026-02-01 15:00:36.800247152 +0000 UTC m=+0.150220498 container attach d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:00:36 compute-0 sudo[159757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdtuykkwsfjambiwarwawfgblnqgiqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958036.6186397-161-46946327131103/AnsiballZ_file.py'
Feb 01 15:00:36 compute-0 sudo[159757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:37 compute-0 python3.9[159759]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:37 compute-0 sudo[159757]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:37 compute-0 ceph-mon[75179]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:37 compute-0 sudo[159979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzrpzigzwmwxemqjrdjbvzclnvmvhjuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958037.163274-161-97626201761945/AnsiballZ_file.py'
Feb 01 15:00:37 compute-0 sudo[159979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:37 compute-0 lvm[159984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:00:37 compute-0 lvm[159984]: VG ceph_vg0 finished
Feb 01 15:00:37 compute-0 lvm[159987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:00:37 compute-0 lvm[159987]: VG ceph_vg1 finished
Feb 01 15:00:37 compute-0 lvm[159988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:00:37 compute-0 lvm[159988]: VG ceph_vg0 finished
Feb 01 15:00:37 compute-0 lvm[159990]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:00:37 compute-0 lvm[159990]: VG ceph_vg2 finished
Feb 01 15:00:37 compute-0 python3.9[159981]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:37 compute-0 sudo[159979]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:37 compute-0 bold_brahmagupta[159702]: {}
Feb 01 15:00:37 compute-0 systemd[1]: libpod-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Deactivated successfully.
Feb 01 15:00:37 compute-0 systemd[1]: libpod-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Consumed 1.062s CPU time.
Feb 01 15:00:37 compute-0 podman[159639]: 2026-02-01 15:00:37.620041809 +0000 UTC m=+0.970015175 container died d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-056cfdbd7ab0a3182be2157c6e5de3883c3543af0dc222675a18c4004cbad8b4-merged.mount: Deactivated successfully.
Feb 01 15:00:37 compute-0 podman[159639]: 2026-02-01 15:00:37.699734336 +0000 UTC m=+1.049707652 container remove d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 15:00:37 compute-0 systemd[1]: libpod-conmon-d19c41f2521fd697d142449f008f2052d688082e404e27a814e57851e9245645.scope: Deactivated successfully.
Feb 01 15:00:37 compute-0 sudo[159414]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:00:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:00:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:37 compute-0 sudo[160106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:00:37 compute-0 sudo[160106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:00:37 compute-0 sudo[160106]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:37 compute-0 sudo[160181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvhvxovsqgvqlhvjmrzgfnnfsrxyfntz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958037.6469526-161-15933766670320/AnsiballZ_file.py'
Feb 01 15:00:37 compute-0 sudo[160181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:38 compute-0 python3.9[160183]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:00:38 compute-0 sudo[160181]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:38 compute-0 sudo[160333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iloxrqqmpafblgvefuobtsueqqkapslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958038.2358038-212-129924934310215/AnsiballZ_command.py'
Feb 01 15:00:38 compute-0 sudo[160333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:38 compute-0 python3.9[160335]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:38 compute-0 sudo[160333]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:00:38 compute-0 ceph-mon[75179]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:39 compute-0 python3.9[160487]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 15:00:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:40 compute-0 sudo[160637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpsrgsapfmpyzbkyxomvqppyvlzaozku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958039.795115-230-86010431057515/AnsiballZ_systemd_service.py'
Feb 01 15:00:40 compute-0 sudo[160637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:40 compute-0 python3.9[160639]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:00:40 compute-0 systemd[1]: Reloading.
Feb 01 15:00:40 compute-0 systemd-rc-local-generator[160669]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:00:40 compute-0 systemd-sysv-generator[160674]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:00:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:40 compute-0 sudo[160637]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:41 compute-0 ceph-mon[75179]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:41 compute-0 sudo[160825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjonxbgbwwjbvnckxcfqyufwrnsuomdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958040.805003-238-117423713855248/AnsiballZ_command.py'
Feb 01 15:00:41 compute-0 sudo[160825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:41 compute-0 python3.9[160827]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:41 compute-0 sudo[160825]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:41 compute-0 sudo[160978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnaytpatubdxfsdzhsvjqpvvkbwravu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958041.429648-238-255178394422199/AnsiballZ_command.py'
Feb 01 15:00:41 compute-0 sudo[160978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:41 compute-0 python3.9[160980]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:41 compute-0 sudo[160978]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:42 compute-0 sudo[161131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdzrndfmlaxrjyfenvgtojpgrnvujtgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958042.167739-238-143389211779585/AnsiballZ_command.py'
Feb 01 15:00:42 compute-0 sudo[161131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:42 compute-0 python3.9[161133]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:42 compute-0 sudo[161131]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:43 compute-0 ceph-mon[75179]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:43 compute-0 sudo[161284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfnnkeilxoashnfjgxqywmilfjmuhfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958042.763337-238-31654892479736/AnsiballZ_command.py'
Feb 01 15:00:43 compute-0 sudo[161284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:43 compute-0 python3.9[161286]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:44 compute-0 sudo[161284]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:44 compute-0 sudo[161437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndeuondhxsllsbgtfuexbcwhijwrhsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958044.4758663-238-114881193795421/AnsiballZ_command.py'
Feb 01 15:00:44 compute-0 sudo[161437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:44 compute-0 python3.9[161439]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:44 compute-0 sudo[161437]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:45 compute-0 ceph-mon[75179]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:45 compute-0 sudo[161590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrctcucqgqolaqbmzbhfbleurlusjimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958045.0959437-238-248699157063917/AnsiballZ_command.py'
Feb 01 15:00:45 compute-0 sudo[161590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:45 compute-0 python3.9[161592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:45 compute-0 sudo[161590]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:45 compute-0 sudo[161743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsrvrgyhwvefwbhnwrpiwbvmdhudmca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958045.7294662-238-153133010589325/AnsiballZ_command.py'
Feb 01 15:00:45 compute-0 sudo[161743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:46 compute-0 python3.9[161745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:00:46 compute-0 sudo[161743]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:47 compute-0 ceph-mon[75179]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:47 compute-0 sudo[161896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alncevnrdwxlpmtqwunhpfposjvsuzof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958046.5674167-292-177239763002036/AnsiballZ_getent.py'
Feb 01 15:00:47 compute-0 sudo[161896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:47 compute-0 python3.9[161898]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 01 15:00:47 compute-0 sudo[161896]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:00:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:00:48 compute-0 ceph-mon[75179]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:49 compute-0 sudo[162049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwieejxgjggptmcubekgioelhphqgxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958047.5472167-300-180248398493658/AnsiballZ_group.py'
Feb 01 15:00:49 compute-0 sudo[162049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:49 compute-0 python3.9[162051]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 15:00:49 compute-0 groupadd[162052]: group added to /etc/group: name=libvirt, GID=42473
Feb 01 15:00:49 compute-0 groupadd[162052]: group added to /etc/gshadow: name=libvirt
Feb 01 15:00:49 compute-0 groupadd[162052]: new group: name=libvirt, GID=42473
Feb 01 15:00:49 compute-0 sudo[162049]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:50 compute-0 sudo[162207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czozspaggtumfcpjdcjjmhimsjayocxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958049.6287732-308-209496012238201/AnsiballZ_user.py'
Feb 01 15:00:50 compute-0 sudo[162207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:50 compute-0 python3.9[162209]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 01 15:00:50 compute-0 useradd[162211]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Feb 01 15:00:50 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:00:50 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:00:50 compute-0 sudo[162207]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:51 compute-0 ceph-mon[75179]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:51 compute-0 sudo[162368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtemymtijbcmlvkxyuoyusihadpeopf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958050.8195963-319-10856205928398/AnsiballZ_setup.py'
Feb 01 15:00:51 compute-0 sudo[162368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:51 compute-0 python3.9[162370]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 15:00:51 compute-0 sudo[162368]: pam_unix(sudo:session): session closed for user root
Feb 01 15:00:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:52 compute-0 sudo[162452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojynykguxknjedrrpibacmplkbmvcaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958050.8195963-319-10856205928398/AnsiballZ_dnf.py'
Feb 01 15:00:52 compute-0 sudo[162452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:00:52 compute-0 python3.9[162454]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 15:00:53 compute-0 ceph-mon[75179]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:55 compute-0 ceph-mon[75179]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:00:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:57 compute-0 ceph-mon[75179]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:59 compute-0 ceph-mon[75179]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:00:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:01 compute-0 ceph-mon[75179]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:01:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s
                                           Interval WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:01:01 compute-0 CROND[162640]: (root) CMD (run-parts /etc/cron.hourly)
Feb 01 15:01:01 compute-0 run-parts[162643]: (/etc/cron.hourly) starting 0anacron
Feb 01 15:01:01 compute-0 anacron[162651]: Anacron started on 2026-02-01
Feb 01 15:01:01 compute-0 anacron[162651]: Will run job `cron.daily' in 13 min.
Feb 01 15:01:01 compute-0 anacron[162651]: Will run job `cron.weekly' in 33 min.
Feb 01 15:01:01 compute-0 anacron[162651]: Will run job `cron.monthly' in 53 min.
Feb 01 15:01:01 compute-0 anacron[162651]: Jobs will be executed sequentially
Feb 01 15:01:01 compute-0 run-parts[162653]: (/etc/cron.hourly) finished 0anacron
Feb 01 15:01:01 compute-0 CROND[162639]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 01 15:01:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:03 compute-0 ceph-mon[75179]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:01:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 19.77 MB, 0.03 MB/s
                                           Interval WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:01:05 compute-0 ceph-mon[75179]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:07 compute-0 podman[162660]: 2026-02-01 15:01:06.99968666 +0000 UTC m=+0.078213357 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:01:07 compute-0 podman[162661]: 2026-02-01 15:01:07.035160376 +0000 UTC m=+0.113787296 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:01:07 compute-0 ceph-mon[75179]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.792 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:01:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.793 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:01:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:01:07.793 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:01:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:01:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s
                                           Interval WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:01:09 compute-0 ceph-mon[75179]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:11 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb 01 15:01:11 compute-0 ceph-mon[75179]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:13 compute-0 ceph-mon[75179]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:14 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 15:01:14 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 15:01:15 compute-0 ceph-mon[75179]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:17 compute-0 ceph-mon[75179]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:01:17
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:01:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:19 compute-0 ceph-mon[75179]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:21 compute-0 ceph-mon[75179]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:23 compute-0 ceph-mon[75179]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:23 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 15:01:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 15:01:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:25 compute-0 ceph-mon[75179]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:26 compute-0 ceph-mon[75179]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:01:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:01:29 compute-0 ceph-mon[75179]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:31 compute-0 ceph-mon[75179]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:33 compute-0 ceph-mon[75179]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:35 compute-0 ceph-mon[75179]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:36 compute-0 ceph-mon[75179]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:37 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 01 15:01:37 compute-0 sudo[166085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:01:37 compute-0 sudo[166085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:37 compute-0 sudo[166085]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:37 compute-0 sudo[166188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:01:37 compute-0 sudo[166188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:37 compute-0 podman[166150]: 2026-02-01 15:01:37.972074067 +0000 UTC m=+0.090669304 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 01 15:01:37 compute-0 podman[166162]: 2026-02-01 15:01:37.996100511 +0000 UTC m=+0.114908054 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 01 15:01:38 compute-0 sudo[166188]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:01:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:01:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:01:38 compute-0 sudo[166828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:01:38 compute-0 sudo[166828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:38 compute-0 sudo[166828]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:38 compute-0 sudo[166898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:01:38 compute-0 sudo[166898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:38 compute-0 podman[167179]: 2026-02-01 15:01:38.924505424 +0000 UTC m=+0.093446521 container create 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 15:01:38 compute-0 podman[167179]: 2026-02-01 15:01:38.851338002 +0000 UTC m=+0.020279129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:38 compute-0 systemd[1]: Started libpod-conmon-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope.
Feb 01 15:01:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:39 compute-0 podman[167179]: 2026-02-01 15:01:39.024456187 +0000 UTC m=+0.193397294 container init 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 15:01:39 compute-0 podman[167179]: 2026-02-01 15:01:39.031572536 +0000 UTC m=+0.200513623 container start 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 15:01:39 compute-0 loving_neumann[167354]: 167 167
Feb 01 15:01:39 compute-0 systemd[1]: libpod-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope: Deactivated successfully.
Feb 01 15:01:39 compute-0 podman[167179]: 2026-02-01 15:01:39.147531178 +0000 UTC m=+0.316472315 container attach 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:01:39 compute-0 podman[167179]: 2026-02-01 15:01:39.148054422 +0000 UTC m=+0.316995529 container died 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:01:39 compute-0 ceph-mon[75179]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:01:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8956a1a0753ceeca025308bc2f83fabd481a167ed3e7ec205dd71bfcc8d96580-merged.mount: Deactivated successfully.
Feb 01 15:01:39 compute-0 podman[167179]: 2026-02-01 15:01:39.345850449 +0000 UTC m=+0.514791566 container remove 6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_neumann, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 15:01:39 compute-0 systemd[1]: libpod-conmon-6e11e3e43a533cd079df4d4c2c1f4cc4cd959c4a3a9f912596302f36b8fb0212.scope: Deactivated successfully.
Feb 01 15:01:39 compute-0 podman[167845]: 2026-02-01 15:01:39.53987969 +0000 UTC m=+0.077649679 container create 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:01:39 compute-0 systemd[1]: Started libpod-conmon-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope.
Feb 01 15:01:39 compute-0 podman[167845]: 2026-02-01 15:01:39.52312038 +0000 UTC m=+0.060890419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:39 compute-0 podman[167845]: 2026-02-01 15:01:39.632702542 +0000 UTC m=+0.170472531 container init 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 15:01:39 compute-0 podman[167845]: 2026-02-01 15:01:39.64119256 +0000 UTC m=+0.178962589 container start 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:01:39 compute-0 podman[167845]: 2026-02-01 15:01:39.645229434 +0000 UTC m=+0.182999423 container attach 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:01:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:40 compute-0 confident_pare[167989]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:01:40 compute-0 confident_pare[167989]: --> All data devices are unavailable
Feb 01 15:01:40 compute-0 systemd[1]: libpod-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope: Deactivated successfully.
Feb 01 15:01:40 compute-0 podman[167845]: 2026-02-01 15:01:40.065817907 +0000 UTC m=+0.603587926 container died 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 15:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-780e2a1ecc39c935a0ad2556b87b7ee7b89fc1c79faad53c6faa66a7b5a905e7-merged.mount: Deactivated successfully.
Feb 01 15:01:40 compute-0 podman[167845]: 2026-02-01 15:01:40.105818148 +0000 UTC m=+0.643588137 container remove 1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:01:40 compute-0 systemd[1]: libpod-conmon-1fb7b506f2200d0693829cd93c5ab92549fb50e4ffaa7b7e523ffc5f39f41e7a.scope: Deactivated successfully.
Feb 01 15:01:40 compute-0 sudo[166898]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:40 compute-0 sudo[168514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:01:40 compute-0 sudo[168514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:40 compute-0 sudo[168514]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:40 compute-0 sudo[168584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:01:40 compute-0 sudo[168584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.47178271 +0000 UTC m=+0.032695388 container create 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:01:40 compute-0 systemd[1]: Started libpod-conmon-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope.
Feb 01 15:01:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:40 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.542338898 +0000 UTC m=+0.103251606 container init 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.547571415 +0000 UTC m=+0.108484103 container start 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:01:40 compute-0 jolly_lederberg[168921]: 167 167
Feb 01 15:01:40 compute-0 systemd[1]: libpod-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope: Deactivated successfully.
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.552133783 +0000 UTC m=+0.113046481 container attach 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.552508414 +0000 UTC m=+0.113421092 container died 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.457745536 +0000 UTC m=+0.018658234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82edcebe7af1e35ef35394bf42fc8017e4c31d68b0194216a0fbfe3789bd57b-merged.mount: Deactivated successfully.
Feb 01 15:01:40 compute-0 podman[168848]: 2026-02-01 15:01:40.578581485 +0000 UTC m=+0.139494163 container remove 2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 15:01:40 compute-0 systemd[1]: libpod-conmon-2c0b3b7f72e0add12412b4e591d67ea4a4bc1ba93beb59ecdef51dbff563e0cf.scope: Deactivated successfully.
Feb 01 15:01:40 compute-0 podman[169116]: 2026-02-01 15:01:40.703648482 +0000 UTC m=+0.031059192 container create 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 15:01:40 compute-0 systemd[1]: Started libpod-conmon-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope.
Feb 01 15:01:40 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:40 compute-0 podman[169116]: 2026-02-01 15:01:40.688920699 +0000 UTC m=+0.016331419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:40 compute-0 podman[169116]: 2026-02-01 15:01:40.78954149 +0000 UTC m=+0.116952280 container init 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 15:01:40 compute-0 podman[169116]: 2026-02-01 15:01:40.797874584 +0000 UTC m=+0.125285294 container start 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:01:40 compute-0 podman[169116]: 2026-02-01 15:01:40.80274453 +0000 UTC m=+0.130155340 container attach 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:01:41 compute-0 pedantic_panini[169212]: {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     "0": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "devices": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "/dev/loop3"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             ],
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_name": "ceph_lv0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_size": "21470642176",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "name": "ceph_lv0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "tags": {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_name": "ceph",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.crush_device_class": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.encrypted": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.objectstore": "bluestore",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_id": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.vdo": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.with_tpm": "0"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             },
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "vg_name": "ceph_vg0"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         }
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     ],
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     "1": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "devices": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "/dev/loop4"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             ],
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_name": "ceph_lv1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_size": "21470642176",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "name": "ceph_lv1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "tags": {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_name": "ceph",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.crush_device_class": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.encrypted": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.objectstore": "bluestore",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_id": "1",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.vdo": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.with_tpm": "0"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             },
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "vg_name": "ceph_vg1"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         }
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     ],
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     "2": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "devices": [
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "/dev/loop5"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             ],
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_name": "ceph_lv2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_size": "21470642176",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "name": "ceph_lv2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "tags": {
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.cluster_name": "ceph",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.crush_device_class": "",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.encrypted": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.objectstore": "bluestore",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osd_id": "2",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.vdo": "0",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:                 "ceph.with_tpm": "0"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             },
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "type": "block",
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:             "vg_name": "ceph_vg2"
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:         }
Feb 01 15:01:41 compute-0 pedantic_panini[169212]:     ]
Feb 01 15:01:41 compute-0 pedantic_panini[169212]: }
Feb 01 15:01:41 compute-0 systemd[1]: libpod-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope: Deactivated successfully.
Feb 01 15:01:41 compute-0 podman[169116]: 2026-02-01 15:01:41.089665066 +0000 UTC m=+0.417075806 container died 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07009ec560a4e9b636f15e2ef7ec07c2be4bb4460e79b7faa2f28e4e1e71c99-merged.mount: Deactivated successfully.
Feb 01 15:01:41 compute-0 podman[169116]: 2026-02-01 15:01:41.134716759 +0000 UTC m=+0.462127469 container remove 2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 15:01:41 compute-0 systemd[1]: libpod-conmon-2955cf167346f96ab371a3e115e357a22d267f1504b0ba0556a2272454d8214b.scope: Deactivated successfully.
Feb 01 15:01:41 compute-0 ceph-mon[75179]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:41 compute-0 sudo[168584]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:41 compute-0 sudo[169611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:01:41 compute-0 sudo[169611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:41 compute-0 sudo[169611]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:41 compute-0 sudo[169686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:01:41 compute-0 sudo[169686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.527944966 +0000 UTC m=+0.034736555 container create 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 15:01:41 compute-0 systemd[1]: Started libpod-conmon-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope.
Feb 01 15:01:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.591768515 +0000 UTC m=+0.098560124 container init 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.595179501 +0000 UTC m=+0.101971110 container start 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.598267108 +0000 UTC m=+0.105058747 container attach 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:01:41 compute-0 eloquent_khorana[170057]: 167 167
Feb 01 15:01:41 compute-0 systemd[1]: libpod-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope: Deactivated successfully.
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.599683317 +0000 UTC m=+0.106474916 container died 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.512632736 +0000 UTC m=+0.019424355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1ce00a07f0d0f5bb6ad3ac6ef0cd8d84dd392be34690e6ce535dde83ffd93c3-merged.mount: Deactivated successfully.
Feb 01 15:01:41 compute-0 podman[169960]: 2026-02-01 15:01:41.629094252 +0000 UTC m=+0.135885861 container remove 3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_khorana, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:01:41 compute-0 systemd[1]: libpod-conmon-3379b1976e4a4509f25b8039d7c0cd6cc1c52bf324ad6d14db889fd7be30f890.scope: Deactivated successfully.
Feb 01 15:01:41 compute-0 podman[170233]: 2026-02-01 15:01:41.759096327 +0000 UTC m=+0.044866429 container create eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:01:41 compute-0 systemd[1]: Started libpod-conmon-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope.
Feb 01 15:01:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:01:41 compute-0 podman[170233]: 2026-02-01 15:01:41.737107081 +0000 UTC m=+0.022877203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:01:41 compute-0 podman[170233]: 2026-02-01 15:01:41.841444137 +0000 UTC m=+0.127214289 container init eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:01:41 compute-0 podman[170233]: 2026-02-01 15:01:41.847810735 +0000 UTC m=+0.133580827 container start eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 15:01:41 compute-0 podman[170233]: 2026-02-01 15:01:41.850758328 +0000 UTC m=+0.136528530 container attach eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:01:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:42 compute-0 lvm[170973]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:01:42 compute-0 lvm[170971]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:01:42 compute-0 lvm[170971]: VG ceph_vg0 finished
Feb 01 15:01:42 compute-0 lvm[170973]: VG ceph_vg1 finished
Feb 01 15:01:42 compute-0 lvm[170984]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:01:42 compute-0 lvm[170984]: VG ceph_vg2 finished
Feb 01 15:01:42 compute-0 practical_jemison[170327]: {}
Feb 01 15:01:42 compute-0 systemd[1]: libpod-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope: Deactivated successfully.
Feb 01 15:01:42 compute-0 podman[170233]: 2026-02-01 15:01:42.531193858 +0000 UTC m=+0.816963950 container died eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 15:01:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf090da88caf025f311ad1c49f21e3a0218a523ed3de86f2011f00841783f695-merged.mount: Deactivated successfully.
Feb 01 15:01:42 compute-0 podman[170233]: 2026-02-01 15:01:42.56873744 +0000 UTC m=+0.854507522 container remove eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_jemison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:01:42 compute-0 systemd[1]: libpod-conmon-eff7b18a557d511d92f5b3e59919faefb3599728ff8e7097e24ff8293693a565.scope: Deactivated successfully.
Feb 01 15:01:42 compute-0 sudo[169686]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:01:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:01:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:42 compute-0 sudo[171234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:01:42 compute-0 sudo[171234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:01:42 compute-0 sudo[171234]: pam_unix(sudo:session): session closed for user root
Feb 01 15:01:43 compute-0 ceph-mon[75179]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:01:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:45 compute-0 ceph-mon[75179]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:47 compute-0 ceph-mon[75179]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:01:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:01:49 compute-0 ceph-mon[75179]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:51 compute-0 ceph-mon[75179]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:51 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:53 compute-0 ceph-mon[75179]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:53 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:55 compute-0 ceph-mon[75179]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:01:55 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:57 compute-0 ceph-mon[75179]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:57 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:59 compute-0 ceph-mon[75179]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:01:59 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:00 compute-0 ceph-mon[75179]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:01 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:03 compute-0 ceph-mon[75179]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:03 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:05 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.794 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:02:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:02:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:02:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:02:07 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:08 compute-0 ceph-mon[75179]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:09 compute-0 podman[180249]: 2026-02-01 15:02:09.04848079 +0000 UTC m=+0.127527387 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 01 15:02:09 compute-0 podman[180250]: 2026-02-01 15:02:09.083671447 +0000 UTC m=+0.166841480 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 01 15:02:09 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 01 15:02:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 01 15:02:09 compute-0 ceph-mon[75179]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:09 compute-0 ceph-mon[75179]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:09 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:10 compute-0 groupadd[180303]: group added to /etc/group: name=dnsmasq, GID=992
Feb 01 15:02:10 compute-0 groupadd[180303]: group added to /etc/gshadow: name=dnsmasq
Feb 01 15:02:10 compute-0 groupadd[180303]: new group: name=dnsmasq, GID=992
Feb 01 15:02:10 compute-0 useradd[180310]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 01 15:02:10 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 15:02:10 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 01 15:02:10 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Feb 01 15:02:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:11 compute-0 ceph-mon[75179]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:11 compute-0 groupadd[180323]: group added to /etc/group: name=clevis, GID=991
Feb 01 15:02:11 compute-0 groupadd[180323]: group added to /etc/gshadow: name=clevis
Feb 01 15:02:11 compute-0 groupadd[180323]: new group: name=clevis, GID=991
Feb 01 15:02:11 compute-0 useradd[180330]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 01 15:02:11 compute-0 usermod[180340]: add 'clevis' to group 'tss'
Feb 01 15:02:11 compute-0 usermod[180340]: add 'clevis' to shadow group 'tss'
Feb 01 15:02:11 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:13 compute-0 ceph-mon[75179]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:02:13 compute-0 polkitd[43475]: Reloading rules
Feb 01 15:02:13 compute-0 polkitd[43475]: Collecting garbage unconditionally...
Feb 01 15:02:13 compute-0 polkitd[43475]: Loading rules from directory /etc/polkit-1/rules.d
Feb 01 15:02:13 compute-0 polkitd[43475]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 01 15:02:13 compute-0 polkitd[43475]: Finished loading, compiling and executing 3 rules
Feb 01 15:02:13 compute-0 polkitd[43475]: Reloading rules
Feb 01 15:02:13 compute-0 polkitd[43475]: Collecting garbage unconditionally...
Feb 01 15:02:13 compute-0 polkitd[43475]: Loading rules from directory /etc/polkit-1/rules.d
Feb 01 15:02:13 compute-0 polkitd[43475]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 01 15:02:13 compute-0 polkitd[43475]: Finished loading, compiling and executing 3 rules
Feb 01 15:02:13 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:15 compute-0 ceph-mon[75179]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:15 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:17 compute-0 ceph-mon[75179]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:17 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Feb 01 15:02:17 compute-0 sshd[1002]: Received signal 15; terminating.
Feb 01 15:02:17 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Feb 01 15:02:17 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Feb 01 15:02:17 compute-0 systemd[1]: sshd.service: Consumed 2.298s CPU time, read 32.0K from disk, written 16.0K to disk.
Feb 01 15:02:17 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Feb 01 15:02:17 compute-0 systemd[1]: Stopping sshd-keygen.target...
Feb 01 15:02:17 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 15:02:17 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 15:02:17 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 01 15:02:17 compute-0 systemd[1]: Reached target sshd-keygen.target.
Feb 01 15:02:17 compute-0 systemd[1]: Starting OpenSSH server daemon...
Feb 01 15:02:17 compute-0 sshd[181148]: Server listening on 0.0.0.0 port 22.
Feb 01 15:02:17 compute-0 sshd[181148]: Server listening on :: port 22.
Feb 01 15:02:17 compute-0 systemd[1]: Started OpenSSH server daemon.
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:02:17
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:02:17 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:19 compute-0 ceph-mon[75179]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 15:02:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 15:02:19 compute-0 systemd[1]: Reloading.
Feb 01 15:02:19 compute-0 systemd-rc-local-generator[181402]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:19 compute-0 systemd-sysv-generator[181405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 15:02:19 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:21 compute-0 ceph-mon[75179]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:21 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:22 compute-0 sudo[162452]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:22 compute-0 ceph-mon[75179]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:23 compute-0 sudo[187523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcmdrtedpzsdqrmsbckwbttbisxvivxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958142.6857631-331-171701614242979/AnsiballZ_systemd.py'
Feb 01 15:02:23 compute-0 sudo[187523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:23 compute-0 python3.9[187552]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 15:02:23 compute-0 systemd[1]: Reloading.
Feb 01 15:02:23 compute-0 systemd-rc-local-generator[188051]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:23 compute-0 systemd-sysv-generator[188054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:23 compute-0 sudo[187523]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:23 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:24 compute-0 sudo[189132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzeaofsojcaseujaobyhepbkgskbrasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958144.0879352-331-246481039707346/AnsiballZ_systemd.py'
Feb 01 15:02:24 compute-0 sudo[189132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:24 compute-0 python3.9[189157]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 15:02:24 compute-0 systemd[1]: Reloading.
Feb 01 15:02:24 compute-0 systemd-sysv-generator[189692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:24 compute-0 systemd-rc-local-generator[189684]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:24 compute-0 sudo[189132]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:25 compute-0 ceph-mon[75179]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:25 compute-0 sudo[190325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iydrptvbxsmehswtmorqnljozrvbwgxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958145.0962799-331-50708608321199/AnsiballZ_systemd.py'
Feb 01 15:02:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 15:02:25 compute-0 sudo[190325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 15:02:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.549s CPU time.
Feb 01 15:02:25 compute-0 systemd[1]: run-r25c73ef25da04b0aa43bef90637def35.service: Deactivated successfully.
Feb 01 15:02:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:25 compute-0 python3.9[190328]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 15:02:25 compute-0 systemd[1]: Reloading.
Feb 01 15:02:25 compute-0 systemd-rc-local-generator[190355]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:25 compute-0 systemd-sysv-generator[190361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:25 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:26 compute-0 sudo[190325]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:26 compute-0 sudo[190515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcgfknytxgowwvkrfplobrpjiyuzxsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958146.214317-331-56911570573442/AnsiballZ_systemd.py'
Feb 01 15:02:26 compute-0 sudo[190515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:26 compute-0 python3.9[190517]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 15:02:26 compute-0 systemd[1]: Reloading.
Feb 01 15:02:26 compute-0 systemd-rc-local-generator[190545]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:26 compute-0 systemd-sysv-generator[190549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:27 compute-0 ceph-mon[75179]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:27 compute-0 sudo[190515]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:27 compute-0 sudo[190704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrhxdmzpkqzpiosmaugvjjblqibdtnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958147.510756-360-54803868334910/AnsiballZ_systemd.py'
Feb 01 15:02:27 compute-0 sudo[190704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:27 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:02:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:02:28 compute-0 python3.9[190706]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:28 compute-0 systemd[1]: Reloading.
Feb 01 15:02:28 compute-0 systemd-rc-local-generator[190736]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:28 compute-0 systemd-sysv-generator[190739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:28 compute-0 sudo[190704]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:28 compute-0 sudo[190896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzakufmntikdqiqvuphxtltctnpxwufw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958148.5748441-360-91951370158255/AnsiballZ_systemd.py'
Feb 01 15:02:28 compute-0 sudo[190896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:29 compute-0 ceph-mon[75179]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:29 compute-0 python3.9[190898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:29 compute-0 systemd[1]: Reloading.
Feb 01 15:02:29 compute-0 systemd-sysv-generator[190932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:29 compute-0 systemd-rc-local-generator[190927]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:29 compute-0 sudo[190896]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:29 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:30 compute-0 sudo[191086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmbtghcrrkuitqnnxwhomkjwdguypsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958149.6962702-360-52641321955838/AnsiballZ_systemd.py'
Feb 01 15:02:30 compute-0 sudo[191086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:30 compute-0 python3.9[191088]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:30 compute-0 systemd[1]: Reloading.
Feb 01 15:02:30 compute-0 systemd-rc-local-generator[191119]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:30 compute-0 systemd-sysv-generator[191123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:30 compute-0 sudo[191086]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:31 compute-0 sudo[191276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csdsnaxfbyekrqhclwedilvjgzzblogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958150.7586207-360-80290097429384/AnsiballZ_systemd.py'
Feb 01 15:02:31 compute-0 sudo[191276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:31 compute-0 ceph-mon[75179]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:31 compute-0 python3.9[191278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:31 compute-0 sudo[191276]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:31 compute-0 sudo[191431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klcpgnhggykrtnesnfukmgpbebchluky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958151.5704753-360-103849130103235/AnsiballZ_systemd.py'
Feb 01 15:02:31 compute-0 sudo[191431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:31 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:32 compute-0 python3.9[191433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:32 compute-0 systemd[1]: Reloading.
Feb 01 15:02:32 compute-0 systemd-rc-local-generator[191458]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:32 compute-0 systemd-sysv-generator[191463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:32 compute-0 sudo[191431]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:32 compute-0 sudo[191620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czsrkweibyuqajmrztdjduntdbvqurmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958152.7316442-396-252583650397556/AnsiballZ_systemd.py'
Feb 01 15:02:32 compute-0 sudo[191620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:33 compute-0 ceph-mon[75179]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:33 compute-0 python3.9[191622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 01 15:02:33 compute-0 systemd[1]: Reloading.
Feb 01 15:02:33 compute-0 systemd-rc-local-generator[191648]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:02:33 compute-0 systemd-sysv-generator[191652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:02:33 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 01 15:02:33 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 01 15:02:33 compute-0 sudo[191620]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:33 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:34 compute-0 sudo[191813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldxjngshmneisnrkrdzcpwghraotbgpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958153.8130703-404-147923859279296/AnsiballZ_systemd.py'
Feb 01 15:02:34 compute-0 sudo[191813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:34 compute-0 python3.9[191815]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:34 compute-0 sudo[191813]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:34 compute-0 sudo[191968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexcxkplcyfkunvihtsneblgujcblplw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958154.6443763-404-62558952382124/AnsiballZ_systemd.py'
Feb 01 15:02:34 compute-0 sudo[191968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:35 compute-0 python3.9[191970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:35 compute-0 sudo[191968]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:35 compute-0 ceph-mon[75179]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:35 compute-0 sudo[192123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pstcawgwocawyoqrfpjyivmvohnevewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958155.391659-404-262540940859572/AnsiballZ_systemd.py'
Feb 01 15:02:35 compute-0 sudo[192123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:35 compute-0 python3.9[192125]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:35 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:36 compute-0 sudo[192123]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:36 compute-0 sudo[192278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gysjtstkvfkeoupwmrvdqminfezyyjft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958156.1197877-404-145482010833947/AnsiballZ_systemd.py'
Feb 01 15:02:36 compute-0 sudo[192278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:36 compute-0 ceph-mon[75179]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:36 compute-0 python3.9[192280]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:36 compute-0 sudo[192278]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:37 compute-0 sudo[192433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnyobhlqogppcpnevpkpkjhmyubsvnmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958156.8691635-404-59725523910560/AnsiballZ_systemd.py'
Feb 01 15:02:37 compute-0 sudo[192433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:37 compute-0 python3.9[192435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:37 compute-0 sudo[192433]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:37 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:38 compute-0 sudo[192588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bloligpexwqenyzokqrqvqqpzdxlnmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958157.7713757-404-254333783828350/AnsiballZ_systemd.py'
Feb 01 15:02:38 compute-0 sudo[192588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:38 compute-0 python3.9[192590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:38 compute-0 sudo[192588]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:38 compute-0 sudo[192743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soowdmafwvpofucqwjejwlazxqtpshcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958158.579649-404-248560139556844/AnsiballZ_systemd.py'
Feb 01 15:02:38 compute-0 sudo[192743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:39 compute-0 python3.9[192745]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:39 compute-0 sudo[192743]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:39 compute-0 ceph-mon[75179]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:39 compute-0 podman[192747]: 2026-02-01 15:02:39.206680978 +0000 UTC m=+0.085949699 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 01 15:02:39 compute-0 podman[192748]: 2026-02-01 15:02:39.209179048 +0000 UTC m=+0.089437407 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 01 15:02:39 compute-0 sudo[192942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjklqlmscpualzzlpbvlnlexepbdxwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958159.335608-404-236605546302574/AnsiballZ_systemd.py'
Feb 01 15:02:39 compute-0 sudo[192942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:39 compute-0 python3.9[192944]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:39 compute-0 sudo[192942]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:39 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:40 compute-0 sudo[193097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhirbepduwriuwcmvtnanxobuwvnqvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958160.0997682-404-160144157532738/AnsiballZ_systemd.py'
Feb 01 15:02:40 compute-0 sudo[193097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:40 compute-0 python3.9[193099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:40 compute-0 sudo[193097]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:41 compute-0 sudo[193252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sllyuslptxqunrqdyyvcvsfzgykwjsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958160.9179282-404-162156900130467/AnsiballZ_systemd.py'
Feb 01 15:02:41 compute-0 sudo[193252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:41 compute-0 ceph-mon[75179]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:41 compute-0 python3.9[193254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:41 compute-0 sudo[193252]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:41 compute-0 sudo[193407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrvffsdkletypzozhjwmefsxhdqhjnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958161.725839-404-166890464080544/AnsiballZ_systemd.py'
Feb 01 15:02:41 compute-0 sudo[193407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:41 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:42 compute-0 python3.9[193409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:42 compute-0 sudo[193407]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:42 compute-0 sudo[193513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:02:42 compute-0 sudo[193513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:42 compute-0 sudo[193513]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:42 compute-0 sudo[193561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 15:02:42 compute-0 sudo[193561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:42 compute-0 sudo[193612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwvqzbcdmtwmipcywbjpuwocslvwsst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958162.4867208-404-143309905969649/AnsiballZ_systemd.py'
Feb 01 15:02:42 compute-0 sudo[193612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:43 compute-0 python3.9[193614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:43 compute-0 sudo[193612]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:43 compute-0 podman[193661]: 2026-02-01 15:02:43.151062294 +0000 UTC m=+0.055468975 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 15:02:43 compute-0 podman[193661]: 2026-02-01 15:02:43.271636913 +0000 UTC m=+0.176043594 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:02:43 compute-0 ceph-mon[75179]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:43 compute-0 sudo[193924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyhvrjmuhezjofxcvpsqcfzvjsiaetqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958163.2745667-404-49025100768281/AnsiballZ_systemd.py'
Feb 01 15:02:43 compute-0 sudo[193924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:43 compute-0 python3.9[193928]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:43 compute-0 sudo[193924]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:43 compute-0 sudo[193561]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:02:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:02:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:43 compute-0 sudo[194028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:02:43 compute-0 sudo[194028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:43 compute-0 sudo[194028]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:43 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:44 compute-0 sudo[194054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:02:44 compute-0 sudo[194054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:44 compute-0 sudo[194217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jejfiwdenapmusjwhuvniohzttdzupbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958164.0016613-404-121951046392548/AnsiballZ_systemd.py'
Feb 01 15:02:44 compute-0 sudo[194217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:44 compute-0 sudo[194054]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:02:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:02:44 compute-0 sudo[194237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:02:44 compute-0 sudo[194237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:44 compute-0 sudo[194237]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:44 compute-0 sudo[194262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:02:44 compute-0 sudo[194262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:44 compute-0 python3.9[194219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 01 15:02:44 compute-0 sudo[194217]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.85480569 +0000 UTC m=+0.050201387 container create 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:02:44 compute-0 systemd[1]: Started libpod-conmon-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope.
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:44 compute-0 ceph-mon[75179]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:02:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:02:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.83622591 +0000 UTC m=+0.031621627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.941736876 +0000 UTC m=+0.137132593 container init 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.947255851 +0000 UTC m=+0.142651558 container start 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.951388386 +0000 UTC m=+0.146784103 container attach 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:02:44 compute-0 zealous_tharp[194343]: 167 167
Feb 01 15:02:44 compute-0 systemd[1]: libpod-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope: Deactivated successfully.
Feb 01 15:02:44 compute-0 podman[194327]: 2026-02-01 15:02:44.953162516 +0000 UTC m=+0.148558223 container died 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d484e6f7448fb5b86f659944dd18358f1ff4e04f2277f55474400e65601736-merged.mount: Deactivated successfully.
Feb 01 15:02:45 compute-0 podman[194327]: 2026-02-01 15:02:45.012738955 +0000 UTC m=+0.208134692 container remove 411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_tharp, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:02:45 compute-0 systemd[1]: libpod-conmon-411cf56e4102c5306278bbe56755f6a9a1aa56ebfb3f4fa94a1fbaff2c777fbf.scope: Deactivated successfully.
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.176093622 +0000 UTC m=+0.062093801 container create 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 15:02:45 compute-0 systemd[1]: Started libpod-conmon-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope.
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.148986543 +0000 UTC m=+0.034986812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:45 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.295330983 +0000 UTC m=+0.181331232 container init 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.310957971 +0000 UTC m=+0.196958150 container start 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.315521619 +0000 UTC m=+0.201521878 container attach 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 15:02:45 compute-0 sudo[194513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfujcbqyfnkyyzudjnnnstwjiucfyxql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958165.0597897-506-87344738984584/AnsiballZ_file.py'
Feb 01 15:02:45 compute-0 sudo[194513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:45 compute-0 python3.9[194515]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:45 compute-0 sudo[194513]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:45 compute-0 reverent_buck[194458]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:02:45 compute-0 reverent_buck[194458]: --> All data devices are unavailable
Feb 01 15:02:45 compute-0 systemd[1]: libpod-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope: Deactivated successfully.
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.792893424 +0000 UTC m=+0.678893603 container died 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a76f8e318d14036bff7a67e6ac3789cf6151235b915ad4a335577dd2822a1a7a-merged.mount: Deactivated successfully.
Feb 01 15:02:45 compute-0 podman[194408]: 2026-02-01 15:02:45.832504424 +0000 UTC m=+0.718504613 container remove 70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:02:45 compute-0 systemd[1]: libpod-conmon-70a40779da98d4aaf90a2c0ab377a8ceaa096ef3d5cb94783380013647961aaf.scope: Deactivated successfully.
Feb 01 15:02:45 compute-0 sudo[194262]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:45 compute-0 sudo[194666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:02:45 compute-0 sudo[194666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:45 compute-0 sudo[194715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmhnvhqpcwzyvizszyauauazfmsbcbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958165.6799932-506-102007990637868/AnsiballZ_file.py'
Feb 01 15:02:45 compute-0 sudo[194666]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:45 compute-0 sudo[194715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:45 compute-0 sudo[194720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:02:45 compute-0 sudo[194720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:45 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:46 compute-0 python3.9[194719]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:46 compute-0 sudo[194715]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.190609068 +0000 UTC m=+0.031862124 container create c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 15:02:46 compute-0 systemd[1]: Started libpod-conmon-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope.
Feb 01 15:02:46 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.254257321 +0000 UTC m=+0.095510407 container init c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.259090266 +0000 UTC m=+0.100343312 container start c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 15:02:46 compute-0 awesome_nobel[194806]: 167 167
Feb 01 15:02:46 compute-0 systemd[1]: libpod-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope: Deactivated successfully.
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.26528393 +0000 UTC m=+0.106536996 container attach c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.265599219 +0000 UTC m=+0.106852275 container died c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.176021489 +0000 UTC m=+0.017274565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb434392060e30875fb31050998934718d79084d3fbba4bd767d8b942f678f8-merged.mount: Deactivated successfully.
Feb 01 15:02:46 compute-0 podman[194763]: 2026-02-01 15:02:46.326534206 +0000 UTC m=+0.167787262 container remove c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:02:46 compute-0 systemd[1]: libpod-conmon-c581d37a764dcd253420a3354f91266c67d7e680693e83e6bd08a16738c775f5.scope: Deactivated successfully.
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.434176172 +0000 UTC m=+0.034512518 container create c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 15:02:46 compute-0 systemd[1]: Started libpod-conmon-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope.
Feb 01 15:02:46 compute-0 sudo[194964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzsczfeunqcucgytstdcddkrbhewyuaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958166.239504-506-279297271861750/AnsiballZ_file.py'
Feb 01 15:02:46 compute-0 sudo[194964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:46 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.509880813 +0000 UTC m=+0.110217249 container init c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.417916437 +0000 UTC m=+0.018252803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.514437941 +0000 UTC m=+0.114774287 container start c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.521725015 +0000 UTC m=+0.122061371 container attach c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:02:46 compute-0 python3.9[194969]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:46 compute-0 sudo[194964]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]: {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     "0": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "devices": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "/dev/loop3"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             ],
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_name": "ceph_lv0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_size": "21470642176",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "name": "ceph_lv0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "tags": {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_name": "ceph",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.crush_device_class": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.encrypted": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.objectstore": "bluestore",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_id": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.vdo": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.with_tpm": "0"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             },
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "vg_name": "ceph_vg0"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         }
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     ],
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     "1": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "devices": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "/dev/loop4"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             ],
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_name": "ceph_lv1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_size": "21470642176",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "name": "ceph_lv1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "tags": {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_name": "ceph",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.crush_device_class": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.encrypted": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.objectstore": "bluestore",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_id": "1",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.vdo": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.with_tpm": "0"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             },
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "vg_name": "ceph_vg1"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         }
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     ],
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     "2": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "devices": [
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "/dev/loop5"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             ],
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_name": "ceph_lv2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_size": "21470642176",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "name": "ceph_lv2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "tags": {
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.cluster_name": "ceph",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.crush_device_class": "",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.encrypted": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.objectstore": "bluestore",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osd_id": "2",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.vdo": "0",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:                 "ceph.with_tpm": "0"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             },
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "type": "block",
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:             "vg_name": "ceph_vg2"
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:         }
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]:     ]
Feb 01 15:02:46 compute-0 goofy_sutherland[194966]: }
Feb 01 15:02:46 compute-0 systemd[1]: libpod-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope: Deactivated successfully.
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.820873897 +0000 UTC m=+0.421210273 container died c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-38479ad6c478f8c25edfb0ed616e062d3a3e7962c948b5bfdfafe8b7ef408c2a-merged.mount: Deactivated successfully.
Feb 01 15:02:46 compute-0 podman[194903]: 2026-02-01 15:02:46.858049979 +0000 UTC m=+0.458386335 container remove c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:02:46 compute-0 systemd[1]: libpod-conmon-c2151c42462ac051a3d221018a933fdd64e3b687a13aa7cbe694bba622e78594.scope: Deactivated successfully.
Feb 01 15:02:46 compute-0 sudo[194720]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:46 compute-0 sudo[195086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:02:46 compute-0 sudo[195086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:46 compute-0 sudo[195086]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:46 compute-0 sudo[195132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:02:47 compute-0 sudo[195132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:47 compute-0 sudo[195186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxhapnljbsglefibbduevqkunagxojpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958166.7790995-506-202230691901002/AnsiballZ_file.py'
Feb 01 15:02:47 compute-0 sudo[195186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:47 compute-0 ceph-mon[75179]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.236748499 +0000 UTC m=+0.032388588 container create 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb 01 15:02:47 compute-0 python3.9[195188]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:47 compute-0 systemd[1]: Started libpod-conmon-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope.
Feb 01 15:02:47 compute-0 sudo[195186]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:47 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.309141368 +0000 UTC m=+0.104781467 container init 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.315680871 +0000 UTC m=+0.111320940 container start 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 15:02:47 compute-0 nostalgic_colden[195220]: 167 167
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.223866358 +0000 UTC m=+0.019506437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:47 compute-0 systemd[1]: libpod-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope: Deactivated successfully.
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.320413053 +0000 UTC m=+0.116053142 container attach 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.321036491 +0000 UTC m=+0.116676590 container died 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-568e48b9b41534189804aea9e9676e53116caa5a5e52fcf45bb1ac28ce869dcd-merged.mount: Deactivated successfully.
Feb 01 15:02:47 compute-0 podman[195203]: 2026-02-01 15:02:47.359612562 +0000 UTC m=+0.155252661 container remove 34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:02:47 compute-0 systemd[1]: libpod-conmon-34199ecf8fc2211536a33479bb84376168b101961831d5dd293090601f824cb9.scope: Deactivated successfully.
Feb 01 15:02:47 compute-0 podman[195300]: 2026-02-01 15:02:47.51766379 +0000 UTC m=+0.068015617 container create e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:02:47 compute-0 systemd[1]: Started libpod-conmon-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope.
Feb 01 15:02:47 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:47 compute-0 podman[195300]: 2026-02-01 15:02:47.495598292 +0000 UTC m=+0.045950159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:02:47 compute-0 podman[195300]: 2026-02-01 15:02:47.615582054 +0000 UTC m=+0.165933921 container init e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:02:47 compute-0 podman[195300]: 2026-02-01 15:02:47.621897571 +0000 UTC m=+0.172249428 container start e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:02:47 compute-0 podman[195300]: 2026-02-01 15:02:47.627394465 +0000 UTC m=+0.177746332 container attach e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:02:47 compute-0 sudo[195418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgozmfjbyboctmimxyzqwafvtrsnadgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958167.400242-506-49411201931990/AnsiballZ_file.py'
Feb 01 15:02:47 compute-0 sudo[195418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:47 compute-0 python3.9[195420]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:47 compute-0 sudo[195418]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:47 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:48 compute-0 lvm[195615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:02:48 compute-0 lvm[195615]: VG ceph_vg0 finished
Feb 01 15:02:48 compute-0 lvm[195619]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:02:48 compute-0 lvm[195619]: VG ceph_vg1 finished
Feb 01 15:02:48 compute-0 sudo[195646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsbxvnuwpcpqzmtbljydafzlinidnrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958168.2877102-506-21293728072641/AnsiballZ_file.py'
Feb 01 15:02:48 compute-0 sudo[195646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:48 compute-0 lvm[195647]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:02:48 compute-0 lvm[195647]: VG ceph_vg2 finished
Feb 01 15:02:48 compute-0 friendly_dubinsky[195369]: {}
Feb 01 15:02:48 compute-0 systemd[1]: libpod-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Deactivated successfully.
Feb 01 15:02:48 compute-0 systemd[1]: libpod-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Consumed 1.685s CPU time.
Feb 01 15:02:48 compute-0 podman[195300]: 2026-02-01 15:02:48.695365557 +0000 UTC m=+1.245717404 container died e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:02:48 compute-0 python3.9[195649]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:02:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca1ed91bf2e79d53d6c6bbc5cbfabeced1e698a66e958310310e2d186d0c379c-merged.mount: Deactivated successfully.
Feb 01 15:02:48 compute-0 sudo[195646]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:48 compute-0 podman[195300]: 2026-02-01 15:02:48.75152573 +0000 UTC m=+1.301877547 container remove e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:02:48 compute-0 systemd[1]: libpod-conmon-e503b1db29ddf9517535eb12fe6f02191793ceb765a89de55a0053285c6ec9e4.scope: Deactivated successfully.
Feb 01 15:02:48 compute-0 sudo[195132]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:02:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:02:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:48 compute-0 sudo[195688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:02:48 compute-0 sudo[195688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:02:48 compute-0 sudo[195688]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:49 compute-0 ceph-mon[75179]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:02:49 compute-0 python3.9[195838]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 15:02:49 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:50 compute-0 sudo[195988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecajguwbnbrxantjldjekeggtgpbpaen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958169.6471763-557-93934907013417/AnsiballZ_stat.py'
Feb 01 15:02:50 compute-0 sudo[195988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:50 compute-0 python3.9[195990]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:50 compute-0 sudo[195988]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:50 compute-0 ceph-mon[75179]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:50 compute-0 sudo[196113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obccsvbizjzhujaooaauvtubtbqtajtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958169.6471763-557-93934907013417/AnsiballZ_copy.py'
Feb 01 15:02:50 compute-0 sudo[196113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:51 compute-0 python3.9[196115]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958169.6471763-557-93934907013417/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:51 compute-0 sudo[196113]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:51 compute-0 sudo[196265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iehxvdecjpyhvqgxsxhaspaijpkfjghs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958171.2153761-557-263758877874439/AnsiballZ_stat.py'
Feb 01 15:02:51 compute-0 sudo[196265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:51 compute-0 python3.9[196267]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:51 compute-0 sudo[196265]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:52 compute-0 sudo[196390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptkrwfbmqefetpndupaxxfobwvfcsnak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958171.2153761-557-263758877874439/AnsiballZ_copy.py'
Feb 01 15:02:52 compute-0 sudo[196390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:52 compute-0 python3.9[196392]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958171.2153761-557-263758877874439/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:52 compute-0 sudo[196390]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:52 compute-0 sudo[196542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxsxrwsdttrtwjvkseokzeflaydovqmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958172.352008-557-181641515215162/AnsiballZ_stat.py'
Feb 01 15:02:52 compute-0 sudo[196542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:52 compute-0 python3.9[196544]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:52 compute-0 sudo[196542]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:53 compute-0 ceph-mon[75179]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:53 compute-0 sudo[196667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqgdvtzkzmetpokoltbjlhaefbbfqkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958172.352008-557-181641515215162/AnsiballZ_copy.py'
Feb 01 15:02:53 compute-0 sudo[196667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:53 compute-0 python3.9[196669]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958172.352008-557-181641515215162/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:53 compute-0 sudo[196667]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:53 compute-0 sudo[196819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuanyyigsrwkwmtsaupohegrgcxyqwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958173.412101-557-190225944031075/AnsiballZ_stat.py'
Feb 01 15:02:53 compute-0 sudo[196819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:53 compute-0 python3.9[196821]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:53 compute-0 sudo[196819]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:54 compute-0 sudo[196944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbzgyyvdzepntjznsukvsnntqsakyyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958173.412101-557-190225944031075/AnsiballZ_copy.py'
Feb 01 15:02:54 compute-0 sudo[196944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:54 compute-0 python3.9[196946]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958173.412101-557-190225944031075/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:54 compute-0 sudo[196944]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:54 compute-0 sudo[197096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvguuzhydgojxkqoskalqtyppbktzzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958174.6263795-557-21609612603065/AnsiballZ_stat.py'
Feb 01 15:02:54 compute-0 sudo[197096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:55 compute-0 ceph-mon[75179]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:55 compute-0 python3.9[197098]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:55 compute-0 sudo[197096]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:55 compute-0 sudo[197221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eitlfwibkqsxsajjcsrvozkhqmisfrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958174.6263795-557-21609612603065/AnsiballZ_copy.py'
Feb 01 15:02:55 compute-0 sudo[197221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:02:55 compute-0 python3.9[197223]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958174.6263795-557-21609612603065/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:55 compute-0 sudo[197221]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:56 compute-0 sudo[197373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlxwoeqifiygolvgnibrxzyagogjijdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958175.7613988-557-189555856095843/AnsiballZ_stat.py'
Feb 01 15:02:56 compute-0 sudo[197373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:56 compute-0 python3.9[197375]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:56 compute-0 sudo[197373]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:56 compute-0 sudo[197498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldlbaqvcgnngmwjmteqhzqbnfbkxueps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958175.7613988-557-189555856095843/AnsiballZ_copy.py'
Feb 01 15:02:56 compute-0 sudo[197498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:56 compute-0 python3.9[197500]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958175.7613988-557-189555856095843/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:56 compute-0 sudo[197498]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:57 compute-0 ceph-mon[75179]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:57 compute-0 sudo[197650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oenusdislcutalaoegmyqoamxixtbire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958177.0121386-557-239188825204576/AnsiballZ_stat.py'
Feb 01 15:02:57 compute-0 sudo[197650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:57 compute-0 python3.9[197652]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:57 compute-0 sudo[197650]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:57 compute-0 sudo[197773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdconqowwypmgcjwfxtertjbeaircdip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958177.0121386-557-239188825204576/AnsiballZ_copy.py'
Feb 01 15:02:57 compute-0 sudo[197773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:58 compute-0 python3.9[197775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958177.0121386-557-239188825204576/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:58 compute-0 sudo[197773]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:58 compute-0 sudo[197925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rirjipnxskerwreignhwbxsnjodmgujo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958178.165389-557-192249679704920/AnsiballZ_stat.py'
Feb 01 15:02:58 compute-0 sudo[197925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:58 compute-0 python3.9[197927]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:02:58 compute-0 sudo[197925]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:58 compute-0 sudo[198050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnlenazzbfgirohhcnbeswzrppdqahqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958178.165389-557-192249679704920/AnsiballZ_copy.py'
Feb 01 15:02:58 compute-0 sudo[198050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:59 compute-0 ceph-mon[75179]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:02:59 compute-0 python3.9[198052]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769958178.165389-557-192249679704920/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:02:59 compute-0 sudo[198050]: pam_unix(sudo:session): session closed for user root
Feb 01 15:02:59 compute-0 sudo[198202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cytylslyjwehbitgwxclzbqnxkkvfnyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958179.3815365-670-155186707746662/AnsiballZ_command.py'
Feb 01 15:02:59 compute-0 sudo[198202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:02:59 compute-0 python3.9[198204]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 01 15:03:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:00 compute-0 sudo[198202]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:00 compute-0 sudo[198355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkdfqprtggxfvwtzqhmkpsobykscxaiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958180.2813277-679-32108765510991/AnsiballZ_file.py'
Feb 01 15:03:00 compute-0 sudo[198355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:00 compute-0 python3.9[198357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:00 compute-0 sudo[198355]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:01 compute-0 ceph-mon[75179]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:01 compute-0 sudo[198507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvyrrhzdwqswjetqnztwkvzbmesdfdvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958180.916554-679-52517089543333/AnsiballZ_file.py'
Feb 01 15:03:01 compute-0 sudo[198507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:01 compute-0 python3.9[198509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:01 compute-0 sudo[198507]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:01 compute-0 sudo[198659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbfniozimpmtdtpxbtnigqdjkiwkthw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958181.4850376-679-228095290183797/AnsiballZ_file.py'
Feb 01 15:03:01 compute-0 sudo[198659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:01 compute-0 python3.9[198661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:01 compute-0 sudo[198659]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:02 compute-0 sudo[198811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-locyfaamxiogqevookkndcgggngwyjsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958182.1242948-679-109202220493753/AnsiballZ_file.py'
Feb 01 15:03:02 compute-0 sudo[198811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:02 compute-0 python3.9[198813]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:02 compute-0 sudo[198811]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:03 compute-0 sudo[198963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbvjzdhnnvvfndhhlceiqbcsxkbluta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958182.7692971-679-181248210324441/AnsiballZ_file.py'
Feb 01 15:03:03 compute-0 sudo[198963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:03 compute-0 ceph-mon[75179]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:03 compute-0 python3.9[198965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:03 compute-0 sudo[198963]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:03 compute-0 sudo[199115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htnsxuuiyjhernrmuxhisajvtbrcfudq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958183.3836024-679-57142602789466/AnsiballZ_file.py'
Feb 01 15:03:03 compute-0 sudo[199115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:03 compute-0 python3.9[199117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:03 compute-0 sudo[199115]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:04 compute-0 sudo[199267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigqjjhbixckwilzrvvcvdmzzhflhwat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958183.9452224-679-32046747302271/AnsiballZ_file.py'
Feb 01 15:03:04 compute-0 sudo[199267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:04 compute-0 python3.9[199269]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:04 compute-0 sudo[199267]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:04 compute-0 sudo[199419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqufgngunfvmxmxyesyixmjpoksoewl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958184.4715192-679-191513400087202/AnsiballZ_file.py'
Feb 01 15:03:04 compute-0 sudo[199419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:04 compute-0 python3.9[199421]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:04 compute-0 sudo[199419]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:05 compute-0 ceph-mon[75179]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:05 compute-0 sudo[199571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswualqjfbivrcuquxolnidepxkcixhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958184.967318-679-222345293573942/AnsiballZ_file.py'
Feb 01 15:03:05 compute-0 sudo[199571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:05 compute-0 python3.9[199573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:05 compute-0 sudo[199571]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:05 compute-0 sudo[199723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdwnrobtwohajjnjnxuzmrianadrzxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958185.567526-679-17385769452031/AnsiballZ_file.py'
Feb 01 15:03:05 compute-0 sudo[199723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:06 compute-0 python3.9[199725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:06 compute-0 sudo[199723]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:06 compute-0 sudo[199875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpcltyepbiofyphdlfvqevgzmixlvtsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958186.262081-679-89245126144572/AnsiballZ_file.py'
Feb 01 15:03:06 compute-0 sudo[199875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:06 compute-0 python3.9[199877]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:06 compute-0 sudo[199875]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:07 compute-0 sudo[200027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqqixtvnqwvzuozwojuykbiavztayqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958187.0326502-679-57521037390004/AnsiballZ_file.py'
Feb 01 15:03:07 compute-0 sudo[200027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:07 compute-0 ceph-mon[75179]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:07 compute-0 python3.9[200029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:07 compute-0 sudo[200027]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.794 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:03:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:03:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:03:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:03:07 compute-0 sudo[200179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jobbzqjgewpliqgpmvgyxzsaltivcudq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958187.6637514-679-63701342314007/AnsiballZ_file.py'
Feb 01 15:03:07 compute-0 sudo[200179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:08 compute-0 python3.9[200181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:08 compute-0 sudo[200179]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:08 compute-0 sudo[200331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djrjbkbvwiwrkjgepaqfdygujhqdmkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958188.2648294-679-252197158019244/AnsiballZ_file.py'
Feb 01 15:03:08 compute-0 sudo[200331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:08 compute-0 python3.9[200333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:08 compute-0 sudo[200331]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:09 compute-0 sudo[200483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpomvppqrjjjyuusklzmeyckzpzxckwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958188.8937201-778-10792988600172/AnsiballZ_stat.py'
Feb 01 15:03:09 compute-0 sudo[200483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:09 compute-0 ceph-mon[75179]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:09 compute-0 python3.9[200485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:09 compute-0 sudo[200483]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:09 compute-0 sudo[200631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dklzkcapsbrsktxqurdzzdefhttdebqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958188.8937201-778-10792988600172/AnsiballZ_copy.py'
Feb 01 15:03:09 compute-0 sudo[200631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:09 compute-0 podman[200580]: 2026-02-01 15:03:09.774164114 +0000 UTC m=+0.089736166 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 01 15:03:09 compute-0 podman[200581]: 2026-02-01 15:03:09.802014634 +0000 UTC m=+0.117249866 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb 01 15:03:09 compute-0 python3.9[200643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958188.8937201-778-10792988600172/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:09 compute-0 sudo[200631]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:10 compute-0 sudo[200802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnrzwkiadlcicvunzfskdqzasjtalmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958190.082023-778-42701562262123/AnsiballZ_stat.py'
Feb 01 15:03:10 compute-0 sudo[200802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:10 compute-0 ceph-mon[75179]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:10 compute-0 python3.9[200804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:10 compute-0 sudo[200802]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:10 compute-0 sudo[200925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkmntpmtlgvamhvqlzuvwihiaqgdzjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958190.082023-778-42701562262123/AnsiballZ_copy.py'
Feb 01 15:03:10 compute-0 sudo[200925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:11 compute-0 python3.9[200927]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958190.082023-778-42701562262123/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:11 compute-0 sudo[200925]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:11 compute-0 sudo[201077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsihftcahnlitgnmdaugvsdggcfngtpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958191.2974386-778-154936042616494/AnsiballZ_stat.py'
Feb 01 15:03:11 compute-0 sudo[201077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:11 compute-0 python3.9[201079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:11 compute-0 sudo[201077]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:12 compute-0 sudo[201200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omvemlzribhvgvndgtorleymuvyxlztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958191.2974386-778-154936042616494/AnsiballZ_copy.py'
Feb 01 15:03:12 compute-0 sudo[201200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:12 compute-0 python3.9[201202]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958191.2974386-778-154936042616494/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:12 compute-0 sudo[201200]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:12 compute-0 sudo[201352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ankjnpqrorvusmnklngrspmzmiseuost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958192.4440532-778-177647756558666/AnsiballZ_stat.py'
Feb 01 15:03:12 compute-0 sudo[201352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:12 compute-0 python3.9[201354]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:12 compute-0 sudo[201352]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:13 compute-0 ceph-mon[75179]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:13 compute-0 sudo[201475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmimexyianhnrxqdwlzgxgxwrpmudqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958192.4440532-778-177647756558666/AnsiballZ_copy.py'
Feb 01 15:03:13 compute-0 sudo[201475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:13 compute-0 python3.9[201477]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958192.4440532-778-177647756558666/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:13 compute-0 sudo[201475]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:13 compute-0 sudo[201627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqplvizmgmbnlizjgujubqbbhbkliyrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958193.576037-778-180287320938589/AnsiballZ_stat.py'
Feb 01 15:03:13 compute-0 sudo[201627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:13 compute-0 python3.9[201629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:13 compute-0 sudo[201627]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:14 compute-0 sudo[201750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqbganxsskxnrkjoasrgonnlwisyqeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958193.576037-778-180287320938589/AnsiballZ_copy.py'
Feb 01 15:03:14 compute-0 sudo[201750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:14 compute-0 python3.9[201752]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958193.576037-778-180287320938589/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:14 compute-0 sudo[201750]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:15 compute-0 ceph-mon[75179]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:15 compute-0 sudo[201902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrudaiwggdevktdnntpfjcufscfpqxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958194.9049804-778-128318470169057/AnsiballZ_stat.py'
Feb 01 15:03:15 compute-0 sudo[201902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:15 compute-0 python3.9[201904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:15 compute-0 sudo[201902]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:15 compute-0 sudo[202025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmfyzkdqlvqbymeupuxwwjfrcxanxyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958194.9049804-778-128318470169057/AnsiballZ_copy.py'
Feb 01 15:03:15 compute-0 sudo[202025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:15 compute-0 python3.9[202027]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958194.9049804-778-128318470169057/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:15 compute-0 sudo[202025]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:16 compute-0 sudo[202177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfhwghitrrlwvhllussbxnrdfrfpxifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958196.0187542-778-23688930473764/AnsiballZ_stat.py'
Feb 01 15:03:16 compute-0 sudo[202177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:16 compute-0 python3.9[202179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:16 compute-0 sudo[202177]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:16 compute-0 sudo[202300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iydwqscpluigbfgsfnzrzyixaozwmjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958196.0187542-778-23688930473764/AnsiballZ_copy.py'
Feb 01 15:03:16 compute-0 sudo[202300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:17 compute-0 python3.9[202302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958196.0187542-778-23688930473764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:17 compute-0 sudo[202300]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:17 compute-0 ceph-mon[75179]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:17 compute-0 sudo[202452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpdskwszkjdbqbtroxvnmqprgdnmluhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958197.192349-778-230536124195852/AnsiballZ_stat.py'
Feb 01 15:03:17 compute-0 sudo[202452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:17 compute-0 python3.9[202454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:17 compute-0 sudo[202452]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:03:17
Feb 01 15:03:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:03:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:03:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes']
Feb 01 15:03:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:18 compute-0 sudo[202575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqcvqkwwgfudshdezcxzzotajcqifes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958197.192349-778-230536124195852/AnsiballZ_copy.py'
Feb 01 15:03:18 compute-0 sudo[202575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:18 compute-0 python3.9[202577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958197.192349-778-230536124195852/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:18 compute-0 sudo[202575]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:03:18 compute-0 sudo[202727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgtysqnvsilrowigratlvjqyvpjwwnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958198.3556695-778-38913876432163/AnsiballZ_stat.py'
Feb 01 15:03:18 compute-0 sudo[202727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:18 compute-0 python3.9[202729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:18 compute-0 sudo[202727]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:19 compute-0 ceph-mon[75179]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:19 compute-0 sudo[202850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivrjmetqczewbugbufkdwpuyotqhnfgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958198.3556695-778-38913876432163/AnsiballZ_copy.py'
Feb 01 15:03:19 compute-0 sudo[202850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:19 compute-0 python3.9[202852]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958198.3556695-778-38913876432163/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:19 compute-0 sudo[202850]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:19 compute-0 sudo[203002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgazyitvyhdrsgywyoykhjmxmlcnwpug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958199.593427-778-128987776802921/AnsiballZ_stat.py'
Feb 01 15:03:19 compute-0 sudo[203002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:20 compute-0 python3.9[203004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:20 compute-0 sudo[203002]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:20 compute-0 sudo[203125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmhsulpicqhqlrfmdwoahfkzwcsulac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958199.593427-778-128987776802921/AnsiballZ_copy.py'
Feb 01 15:03:20 compute-0 sudo[203125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:20 compute-0 python3.9[203127]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958199.593427-778-128987776802921/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:20 compute-0 sudo[203125]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:20 compute-0 sudo[203277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moubjblqpatlqitydjxfgyqllcliccgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958200.6957617-778-67547785508081/AnsiballZ_stat.py'
Feb 01 15:03:20 compute-0 sudo[203277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:21 compute-0 ceph-mon[75179]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:21 compute-0 python3.9[203279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:21 compute-0 sudo[203277]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:21 compute-0 sudo[203400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyhxahrxmebzhzbvedizuurulbpkdaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958200.6957617-778-67547785508081/AnsiballZ_copy.py'
Feb 01 15:03:21 compute-0 sudo[203400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:21 compute-0 python3.9[203402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958200.6957617-778-67547785508081/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:21 compute-0 sudo[203400]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:22 compute-0 sudo[203552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxkbxlvihtoosquguhyajlrdtuvabhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958201.8375354-778-94216886314342/AnsiballZ_stat.py'
Feb 01 15:03:22 compute-0 sudo[203552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:22 compute-0 python3.9[203554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:22 compute-0 sudo[203552]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:22 compute-0 sudo[203675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhairaxgszaxwtxvbzxulbhxxefzmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958201.8375354-778-94216886314342/AnsiballZ_copy.py'
Feb 01 15:03:22 compute-0 sudo[203675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:22 compute-0 python3.9[203677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958201.8375354-778-94216886314342/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:22 compute-0 sudo[203675]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:23 compute-0 ceph-mon[75179]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:23 compute-0 sudo[203827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpzbtgitygvnfpxfbpymtdvpxnavsph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958202.9457405-778-189419772073890/AnsiballZ_stat.py'
Feb 01 15:03:23 compute-0 sudo[203827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:23 compute-0 python3.9[203829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:23 compute-0 sudo[203827]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:23 compute-0 sudo[203950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faiuygfutgmkzsohvshnovakasxfgzlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958202.9457405-778-189419772073890/AnsiballZ_copy.py'
Feb 01 15:03:23 compute-0 sudo[203950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:24 compute-0 python3.9[203952]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958202.9457405-778-189419772073890/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:24 compute-0 sudo[203950]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:24 compute-0 sudo[204102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thqhtdchqxzxzzwhgbvxxiegcgtqvspz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958204.1654148-778-91626845226628/AnsiballZ_stat.py'
Feb 01 15:03:24 compute-0 sudo[204102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:24 compute-0 python3.9[204104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:24 compute-0 sudo[204102]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:25 compute-0 sudo[204225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqejwnozdkeivodvzmleliewajdyrxpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958204.1654148-778-91626845226628/AnsiballZ_copy.py'
Feb 01 15:03:25 compute-0 sudo[204225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:25 compute-0 ceph-mon[75179]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:25 compute-0 python3.9[204227]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958204.1654148-778-91626845226628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:25 compute-0 sudo[204225]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:25 compute-0 python3.9[204377]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:03:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:26 compute-0 sudo[204530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtcgxczoewvjuqhpctmupoknxnbkyekn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958206.1089554-984-28080859484456/AnsiballZ_seboolean.py'
Feb 01 15:03:26 compute-0 sudo[204530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:26 compute-0 python3.9[204532]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 01 15:03:27 compute-0 ceph-mon[75179]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:27 compute-0 sudo[204530]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:03:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:03:28 compute-0 sudo[204686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wenojzhcznvjxxkbpbrysqdxweqrjqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958207.9553483-992-184891604667842/AnsiballZ_copy.py'
Feb 01 15:03:28 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 01 15:03:28 compute-0 sudo[204686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:28 compute-0 python3.9[204688]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:28 compute-0 sudo[204686]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:28 compute-0 sudo[204838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owaiknqfquoubdjtghqgtzgobyqhfgsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958208.664553-992-47452663106083/AnsiballZ_copy.py'
Feb 01 15:03:28 compute-0 sudo[204838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:29 compute-0 python3.9[204840]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:29 compute-0 sudo[204838]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:29 compute-0 ceph-mon[75179]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:29 compute-0 sudo[204990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpyyrdcifcixceuenyuziydjqptddghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958209.2822597-992-13081129728097/AnsiballZ_copy.py'
Feb 01 15:03:29 compute-0 sudo[204990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:29 compute-0 python3.9[204992]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:29 compute-0 sudo[204990]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:30 compute-0 sudo[205142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiogaxmcrmrzztkuhznbbwzcctxuqpei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958209.869929-992-99092020408247/AnsiballZ_copy.py'
Feb 01 15:03:30 compute-0 sudo[205142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:30 compute-0 python3.9[205144]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:30 compute-0 sudo[205142]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:30 compute-0 auditd[701]: Audit daemon rotating log files
Feb 01 15:03:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:30 compute-0 sudo[205294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyrvnggoewjdwoggcutjfwidqdfondpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958210.4114769-992-12971830300146/AnsiballZ_copy.py'
Feb 01 15:03:30 compute-0 sudo[205294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:30 compute-0 python3.9[205296]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:30 compute-0 sudo[205294]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:31 compute-0 ceph-mon[75179]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:31 compute-0 sudo[205446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdktogkkoofrwajtpocpeonnwxpwhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958211.0036721-1028-262977001730778/AnsiballZ_copy.py'
Feb 01 15:03:31 compute-0 sudo[205446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:31 compute-0 python3.9[205448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:31 compute-0 sudo[205446]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:32 compute-0 sudo[205598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyqvnwjthrjffpskrrqtiuzibftpfqwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958211.7679021-1028-249701134192773/AnsiballZ_copy.py'
Feb 01 15:03:32 compute-0 sudo[205598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:32 compute-0 python3.9[205600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:32 compute-0 sudo[205598]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:32 compute-0 sudo[205750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiisdgwwbhpbraitkyrntrcmtpkuiqcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958212.4005642-1028-29541394466803/AnsiballZ_copy.py'
Feb 01 15:03:32 compute-0 sudo[205750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:32 compute-0 python3.9[205752]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:32 compute-0 sudo[205750]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:33 compute-0 ceph-mon[75179]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:33 compute-0 sudo[205902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghdwnjmqnkmwduzdazzygibqzjepdysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958213.003216-1028-244724348070719/AnsiballZ_copy.py'
Feb 01 15:03:33 compute-0 sudo[205902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:33 compute-0 python3.9[205904]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:33 compute-0 sudo[205902]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:33 compute-0 sudo[206054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntgtwyxzifxplmwridnzdakmwdizdilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958213.584072-1028-232291531218982/AnsiballZ_copy.py'
Feb 01 15:03:33 compute-0 sudo[206054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:34 compute-0 python3.9[206056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:34 compute-0 sudo[206054]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:34 compute-0 sudo[206206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtctlnlaqwdwkoozkaaqthccrkrfgyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958214.3031812-1064-176593955740844/AnsiballZ_systemd.py'
Feb 01 15:03:34 compute-0 sudo[206206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:34 compute-0 python3.9[206208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:03:34 compute-0 systemd[1]: Reloading.
Feb 01 15:03:35 compute-0 systemd-sysv-generator[206240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:03:35 compute-0 systemd-rc-local-generator[206236]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:03:35 compute-0 ceph-mon[75179]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:35 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Feb 01 15:03:35 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Feb 01 15:03:35 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 01 15:03:35 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 01 15:03:35 compute-0 systemd[1]: Starting libvirt logging daemon...
Feb 01 15:03:35 compute-0 systemd[1]: Started libvirt logging daemon.
Feb 01 15:03:35 compute-0 sudo[206206]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:35 compute-0 sudo[206400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmaofwqmximimpjcrldgaljmcqzwurr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958215.5305598-1064-4764913587547/AnsiballZ_systemd.py'
Feb 01 15:03:35 compute-0 sudo[206400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:36 compute-0 python3.9[206402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:03:36 compute-0 systemd[1]: Reloading.
Feb 01 15:03:36 compute-0 systemd-sysv-generator[206433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:03:36 compute-0 systemd-rc-local-generator[206430]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:03:36 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 01 15:03:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 01 15:03:36 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 01 15:03:36 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 01 15:03:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 01 15:03:36 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 01 15:03:36 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 01 15:03:36 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 01 15:03:36 compute-0 sudo[206400]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:36 compute-0 sudo[206616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfehfvflbyupceitdysnygxordawwrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958216.6734736-1064-215531004881595/AnsiballZ_systemd.py'
Feb 01 15:03:36 compute-0 sudo[206616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:37 compute-0 ceph-mon[75179]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:37 compute-0 python3.9[206618]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:03:37 compute-0 systemd[1]: Reloading.
Feb 01 15:03:37 compute-0 systemd-rc-local-generator[206641]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:03:37 compute-0 systemd-sysv-generator[206644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:03:37 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 01 15:03:37 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 01 15:03:37 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 01 15:03:37 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 01 15:03:37 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 01 15:03:37 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 01 15:03:37 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 01 15:03:37 compute-0 sudo[206616]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:37 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 01 15:03:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:38 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 01 15:03:38 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 01 15:03:38 compute-0 sudo[206834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzyzoiqkhkpdhwxgydqoqmerpiivlte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958217.890608-1064-104516564500960/AnsiballZ_systemd.py'
Feb 01 15:03:38 compute-0 sudo[206834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:38 compute-0 python3.9[206836]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:03:38 compute-0 systemd[1]: Reloading.
Feb 01 15:03:38 compute-0 systemd-rc-local-generator[206861]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:03:38 compute-0 systemd-sysv-generator[206864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:03:38 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Feb 01 15:03:38 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 01 15:03:38 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 01 15:03:38 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 01 15:03:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 01 15:03:38 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 01 15:03:38 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 01 15:03:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 01 15:03:38 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 01 15:03:38 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 01 15:03:38 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 01 15:03:38 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 01 15:03:38 compute-0 sudo[206834]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:39 compute-0 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 18fce6ee-04e5-42cf-97df-eb8e56d9670c
Feb 01 15:03:39 compute-0 setroubleshoot[206654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Feb 01 15:03:39 compute-0 ceph-mon[75179]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:39 compute-0 sudo[207051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnungbpokqixlzmuhnbnfnvybodocdpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958219.1339724-1064-65022364080676/AnsiballZ_systemd.py'
Feb 01 15:03:39 compute-0 sudo[207051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:39 compute-0 python3.9[207053]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:03:39 compute-0 systemd[1]: Reloading.
Feb 01 15:03:39 compute-0 systemd-rc-local-generator[207080]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:03:39 compute-0 systemd-sysv-generator[207084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:03:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:40 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Feb 01 15:03:40 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Feb 01 15:03:40 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 01 15:03:40 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 01 15:03:40 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 01 15:03:40 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 01 15:03:40 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 01 15:03:40 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 01 15:03:40 compute-0 sudo[207051]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:40 compute-0 podman[207090]: 2026-02-01 15:03:40.154213678 +0000 UTC m=+0.129714299 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb 01 15:03:40 compute-0 podman[207091]: 2026-02-01 15:03:40.159902188 +0000 UTC m=+0.137732575 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 15:03:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:40 compute-0 sudo[207305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjtlofabyeblpvqwjghvrocgopiwgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958220.413224-1101-31546907593063/AnsiballZ_file.py'
Feb 01 15:03:40 compute-0 sudo[207305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:40 compute-0 python3.9[207307]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:40 compute-0 sudo[207305]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:41 compute-0 ceph-mon[75179]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:41 compute-0 sudo[207457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sulyppbcsaysdblhegvabpwgpiwebtsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958221.0074697-1109-154246818655660/AnsiballZ_find.py'
Feb 01 15:03:41 compute-0 sudo[207457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:41 compute-0 python3.9[207459]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 15:03:41 compute-0 sudo[207457]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:41 compute-0 sudo[207609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qahzbdssoixjorqeeoguymbrnkekrydy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958221.600149-1117-21837080628303/AnsiballZ_command.py'
Feb 01 15:03:41 compute-0 sudo[207609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:42 compute-0 python3.9[207611]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:03:42 compute-0 sudo[207609]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:42 compute-0 python3.9[207765]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 15:03:43 compute-0 ceph-mon[75179]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:43 compute-0 python3.9[207915]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:44 compute-0 python3.9[208036]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958223.1088572-1136-164663582789640/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0167405d65199c76e23e57ae481d8cd31475ef34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:44 compute-0 sudo[208186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cprrzbbgfrfzduaxfxzrxrbthabqrfeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958224.321526-1151-71153576670617/AnsiballZ_command.py'
Feb 01 15:03:44 compute-0 sudo[208186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:44 compute-0 python3.9[208188]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:03:44 compute-0 polkitd[43475]: Registered Authentication Agent for unix-process:208190:280684 (system bus name :1.2508 [pkttyagent --process 208190 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 01 15:03:44 compute-0 polkitd[43475]: Unregistered Authentication Agent for unix-process:208190:280684 (system bus name :1.2508, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 01 15:03:44 compute-0 polkitd[43475]: Registered Authentication Agent for unix-process:208189:280684 (system bus name :1.2509 [pkttyagent --process 208189 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 01 15:03:44 compute-0 polkitd[43475]: Unregistered Authentication Agent for unix-process:208189:280684 (system bus name :1.2509, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 01 15:03:44 compute-0 sudo[208186]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:45 compute-0 ceph-mon[75179]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:45 compute-0 python3.9[208350]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:45 compute-0 sudo[208500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srldulanaoiwajniiuvxoegrhfjvzfni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958225.6646194-1167-38076936265668/AnsiballZ_command.py'
Feb 01 15:03:45 compute-0 sudo[208500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:46 compute-0 sudo[208500]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:46 compute-0 sudo[208653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnydhxdyesqtpkimyodpnqbxwiifadkl ; FSID=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f KEY=AQD1Z39pAAAAABAAx9bXBCrv3oQqUCtEn4NgxQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958226.4152353-1175-29911954414024/AnsiballZ_command.py'
Feb 01 15:03:46 compute-0 sudo[208653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:46 compute-0 polkitd[43475]: Registered Authentication Agent for unix-process:208656:280908 (system bus name :1.2512 [pkttyagent --process 208656 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Feb 01 15:03:46 compute-0 polkitd[43475]: Unregistered Authentication Agent for unix-process:208656:280908 (system bus name :1.2512, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Feb 01 15:03:47 compute-0 sudo[208653]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:47 compute-0 ceph-mon[75179]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:47 compute-0 sudo[208811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzwdbzjcpjixahdrgklascrhrtukkox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958227.3093379-1183-34019433509771/AnsiballZ_copy.py'
Feb 01 15:03:47 compute-0 sudo[208811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:47 compute-0 python3.9[208813]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:47 compute-0 sudo[208811]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:03:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:03:48 compute-0 ceph-mon[75179]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:48 compute-0 sudo[208913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:03:48 compute-0 sudo[208913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:48 compute-0 sudo[208913]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:48 compute-0 sudo[208941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:03:48 compute-0 sudo[208941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:49 compute-0 sudo[209013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswpbrajyjyktxzlxftninulrhvwjmpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958228.1597445-1191-203415897450854/AnsiballZ_stat.py'
Feb 01 15:03:49 compute-0 sudo[209013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:49 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 01 15:03:49 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.054s CPU time.
Feb 01 15:03:49 compute-0 python3.9[209015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:49 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 01 15:03:49 compute-0 sudo[209013]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:49 compute-0 sudo[208941]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:03:49 compute-0 sudo[209117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:03:49 compute-0 sudo[209117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:49 compute-0 sudo[209117]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:49 compute-0 sudo[209142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:03:49 compute-0 sudo[209142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:49 compute-0 sudo[209217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alfsgdvzgiuhfhemlzuvlrkhwvsdseqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958228.1597445-1191-203415897450854/AnsiballZ_copy.py'
Feb 01 15:03:49 compute-0 sudo[209217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:49 compute-0 python3.9[209219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958228.1597445-1191-203415897450854/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:49 compute-0 sudo[209217]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.775443383 +0000 UTC m=+0.045056998 container create b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:03:49 compute-0 systemd[1]: Started libpod-conmon-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope.
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.750722927 +0000 UTC m=+0.020336622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:49 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.8706165 +0000 UTC m=+0.140230165 container init b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.876119694 +0000 UTC m=+0.145733309 container start b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.879574482 +0000 UTC m=+0.149188117 container attach b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:49 compute-0 loving_euler[209273]: 167 167
Feb 01 15:03:49 compute-0 systemd[1]: libpod-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope: Deactivated successfully.
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.882775812 +0000 UTC m=+0.152389437 container died b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:03:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-458779b26d187319493e734260cbc71939ba74bc245236c516c69197eba97c84-merged.mount: Deactivated successfully.
Feb 01 15:03:49 compute-0 podman[209232]: 2026-02-01 15:03:49.921976584 +0000 UTC m=+0.191590209 container remove b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 15:03:49 compute-0 systemd[1]: libpod-conmon-b85433fb745c96b4bf7ae9ea1b3702b9a64ddbe5ae5529994de857cf28d53326.scope: Deactivated successfully.
Feb 01 15:03:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.08891841 +0000 UTC m=+0.053346982 container create ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.067455336 +0000 UTC m=+0.031883888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:50 compute-0 systemd[1]: Started libpod-conmon-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope.
Feb 01 15:03:50 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.240518933 +0000 UTC m=+0.204947495 container init ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.248746165 +0000 UTC m=+0.213174737 container start ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.252705246 +0000 UTC m=+0.217133898 container attach ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:03:50 compute-0 sudo[209446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urhtatedjlrizoqetazruuudajorphqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958230.0228207-1207-107130111241999/AnsiballZ_file.py'
Feb 01 15:03:50 compute-0 sudo[209446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:50 compute-0 sshd-session[209363]: Invalid user sol from 80.94.92.171 port 49578
Feb 01 15:03:50 compute-0 python3.9[209450]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:50 compute-0 pedantic_diffie[209374]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:03:50 compute-0 pedantic_diffie[209374]: --> All data devices are unavailable
Feb 01 15:03:50 compute-0 sudo[209446]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:50 compute-0 systemd[1]: libpod-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope: Deactivated successfully.
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.763888274 +0000 UTC m=+0.728316816 container died ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 01 15:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-903616566b3c763b4b58c8f385acebc8aa2f1d07c5abcf27b7acd02a290fedba-merged.mount: Deactivated successfully.
Feb 01 15:03:50 compute-0 podman[209320]: 2026-02-01 15:03:50.812263254 +0000 UTC m=+0.776691826 container remove ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_diffie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:03:50 compute-0 systemd[1]: libpod-conmon-ebe30a723dc03f98633c9257e02c609cdcabc8709290866c2dd00ed3c364816b.scope: Deactivated successfully.
Feb 01 15:03:50 compute-0 sudo[209142]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:50 compute-0 sshd-session[209363]: Connection closed by invalid user sol 80.94.92.171 port 49578 [preauth]
Feb 01 15:03:50 compute-0 ceph-mon[75179]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:50 compute-0 sudo[209518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:03:50 compute-0 sudo[209518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:50 compute-0 sudo[209518]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:51 compute-0 sudo[209571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:03:51 compute-0 sudo[209571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:51 compute-0 sudo[209677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svhobutnktcnvriterchpmrajeuebepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958230.8937428-1215-204065892627958/AnsiballZ_stat.py'
Feb 01 15:03:51 compute-0 sudo[209677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.349776622 +0000 UTC m=+0.064590417 container create 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:03:51 compute-0 systemd[1]: Started libpod-conmon-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope.
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.319877202 +0000 UTC m=+0.034690997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:51 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.446667708 +0000 UTC m=+0.161481563 container init 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 15:03:51 compute-0 python3.9[209688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.45743034 +0000 UTC m=+0.172244135 container start 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.462349039 +0000 UTC m=+0.177162834 container attach 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 15:03:51 compute-0 zealous_goldstine[209707]: 167 167
Feb 01 15:03:51 compute-0 systemd[1]: libpod-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope: Deactivated successfully.
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.463740708 +0000 UTC m=+0.178554503 container died 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:03:51 compute-0 sudo[209677]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0897498f0f2da865418ddf8869f3a47577d417e0c11ff9b628d30937da3a39b2-merged.mount: Deactivated successfully.
Feb 01 15:03:51 compute-0 podman[209690]: 2026-02-01 15:03:51.508119976 +0000 UTC m=+0.222933731 container remove 86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:51 compute-0 systemd[1]: libpod-conmon-86ba32a2b2412d88217ba9f8587650006d909e0d5635c167dc1a00271f8e82d2.scope: Deactivated successfully.
Feb 01 15:03:51 compute-0 podman[209760]: 2026-02-01 15:03:51.670359889 +0000 UTC m=+0.042723082 container create a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:03:51 compute-0 systemd[1]: Started libpod-conmon-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope.
Feb 01 15:03:51 compute-0 podman[209760]: 2026-02-01 15:03:51.655204283 +0000 UTC m=+0.027567496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:51 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:51 compute-0 sudo[209824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whkopddvlhnutkbrosznczresscuvbib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958230.8937428-1215-204065892627958/AnsiballZ_file.py'
Feb 01 15:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:51 compute-0 sudo[209824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:51 compute-0 podman[209760]: 2026-02-01 15:03:51.789048207 +0000 UTC m=+0.161411460 container init a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:51 compute-0 podman[209760]: 2026-02-01 15:03:51.801337373 +0000 UTC m=+0.173700606 container start a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:03:51 compute-0 podman[209760]: 2026-02-01 15:03:51.805811819 +0000 UTC m=+0.178175062 container attach a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:51 compute-0 python3.9[209827]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:52 compute-0 sudo[209824]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:52 compute-0 sleepy_wu[209822]: {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     "0": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "devices": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "/dev/loop3"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             ],
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_name": "ceph_lv0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_size": "21470642176",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "name": "ceph_lv0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "tags": {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_name": "ceph",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.crush_device_class": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.encrypted": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.objectstore": "bluestore",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_id": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.vdo": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.with_tpm": "0"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             },
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "vg_name": "ceph_vg0"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         }
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     ],
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     "1": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "devices": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "/dev/loop4"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             ],
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_name": "ceph_lv1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_size": "21470642176",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "name": "ceph_lv1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "tags": {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_name": "ceph",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.crush_device_class": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.encrypted": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.objectstore": "bluestore",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_id": "1",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.vdo": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.with_tpm": "0"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             },
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "vg_name": "ceph_vg1"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         }
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     ],
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     "2": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "devices": [
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "/dev/loop5"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             ],
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_name": "ceph_lv2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_size": "21470642176",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "name": "ceph_lv2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "tags": {
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.cluster_name": "ceph",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.crush_device_class": "",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.encrypted": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.objectstore": "bluestore",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osd_id": "2",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.vdo": "0",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:                 "ceph.with_tpm": "0"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             },
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "type": "block",
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:             "vg_name": "ceph_vg2"
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:         }
Feb 01 15:03:52 compute-0 sleepy_wu[209822]:     ]
Feb 01 15:03:52 compute-0 sleepy_wu[209822]: }
Feb 01 15:03:52 compute-0 systemd[1]: libpod-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope: Deactivated successfully.
Feb 01 15:03:52 compute-0 podman[209760]: 2026-02-01 15:03:52.125319415 +0000 UTC m=+0.497682618 container died a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-12d70dcf30e59e2799990e1842d50c067252e0c038a5289281e08770e6bb2505-merged.mount: Deactivated successfully.
Feb 01 15:03:52 compute-0 podman[209760]: 2026-02-01 15:03:52.159414324 +0000 UTC m=+0.531777517 container remove a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:03:52 compute-0 systemd[1]: libpod-conmon-a17243135f3956c78ff1912129911daeb552f0ef45e027bf6705671484fa9716.scope: Deactivated successfully.
Feb 01 15:03:52 compute-0 sudo[209571]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:52 compute-0 sudo[209891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:03:52 compute-0 sudo[209891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:52 compute-0 sudo[209891]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:52 compute-0 sudo[209945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:03:52 compute-0 sudo[209945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:52 compute-0 sudo[210043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwdduejghmsjlyxlquqodxbsonkjoqkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958232.2037475-1227-228069590948575/AnsiballZ_stat.py'
Feb 01 15:03:52 compute-0 sudo[210043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.604409569 +0000 UTC m=+0.062701244 container create 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:03:52 compute-0 systemd[1]: Started libpod-conmon-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope.
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.580324132 +0000 UTC m=+0.038615877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.694952596 +0000 UTC m=+0.153244281 container init 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:03:52 compute-0 python3.9[210055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.702888989 +0000 UTC m=+0.161180664 container start 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.706395158 +0000 UTC m=+0.164686853 container attach 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:03:52 compute-0 mystifying_boyd[210072]: 167 167
Feb 01 15:03:52 compute-0 systemd[1]: libpod-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope: Deactivated successfully.
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.708545278 +0000 UTC m=+0.166836983 container died 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 15:03:52 compute-0 sudo[210043]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e199c1ece1155068e44a6720d012361457eb9f1e0923911eec76dc416193485-merged.mount: Deactivated successfully.
Feb 01 15:03:52 compute-0 podman[210056]: 2026-02-01 15:03:52.754958974 +0000 UTC m=+0.213250679 container remove 85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 15:03:52 compute-0 systemd[1]: libpod-conmon-85ca591224f1d8115adda55ec51c1da0024e27a0980cc4699b683070eabb2853.scope: Deactivated successfully.
Feb 01 15:03:52 compute-0 podman[210121]: 2026-02-01 15:03:52.913670718 +0000 UTC m=+0.055918624 container create ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:03:52 compute-0 systemd[1]: Started libpod-conmon-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope.
Feb 01 15:03:52 compute-0 podman[210121]: 2026-02-01 15:03:52.884369323 +0000 UTC m=+0.026617279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:03:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:03:53 compute-0 podman[210121]: 2026-02-01 15:03:53.010899042 +0000 UTC m=+0.153146938 container init ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:03:53 compute-0 podman[210121]: 2026-02-01 15:03:53.021752097 +0000 UTC m=+0.163999973 container start ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:03:53 compute-0 podman[210121]: 2026-02-01 15:03:53.025334488 +0000 UTC m=+0.167582374 container attach ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 15:03:53 compute-0 sudo[210191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcsmlacgkqindoaejrbfugsditftyrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958232.2037475-1227-228069590948575/AnsiballZ_file.py'
Feb 01 15:03:53 compute-0 sudo[210191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:53 compute-0 ceph-mon[75179]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:53 compute-0 python3.9[210195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n3lznqwz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:53 compute-0 sudo[210191]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:53 compute-0 sudo[210415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxzaobzjeqviklxmnnoakpdgwxidvyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958233.4125524-1239-158685687248894/AnsiballZ_stat.py'
Feb 01 15:03:53 compute-0 sudo[210415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:53 compute-0 lvm[210421]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:03:53 compute-0 lvm[210421]: VG ceph_vg0 finished
Feb 01 15:03:53 compute-0 lvm[210422]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:03:53 compute-0 lvm[210422]: VG ceph_vg1 finished
Feb 01 15:03:53 compute-0 lvm[210424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:03:53 compute-0 lvm[210424]: VG ceph_vg2 finished
Feb 01 15:03:53 compute-0 lvm[210425]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:03:53 compute-0 lvm[210425]: VG ceph_vg2 finished
Feb 01 15:03:53 compute-0 nervous_bouman[210162]: {}
Feb 01 15:03:53 compute-0 systemd[1]: libpod-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Deactivated successfully.
Feb 01 15:03:53 compute-0 podman[210121]: 2026-02-01 15:03:53.845097495 +0000 UTC m=+0.987345361 container died ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 15:03:53 compute-0 systemd[1]: libpod-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Consumed 1.211s CPU time.
Feb 01 15:03:53 compute-0 python3.9[210418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d86ca1292d037d4fc0e129c07a528a96b217061bfdd6124957fada95d985d3-merged.mount: Deactivated successfully.
Feb 01 15:03:53 compute-0 podman[210121]: 2026-02-01 15:03:53.881324634 +0000 UTC m=+1.023572500 container remove ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:03:53 compute-0 systemd[1]: libpod-conmon-ba8394e7a295d5610967e187aa8107c9db08b08d7ddf4fb4adf45779c28451bd.scope: Deactivated successfully.
Feb 01 15:03:53 compute-0 sudo[210415]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:53 compute-0 sudo[209945]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:03:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:03:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:54 compute-0 sudo[210462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:03:54 compute-0 sudo[210462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:03:54 compute-0 sudo[210462]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:54 compute-0 sudo[210540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnjmywcgtdgjssxzjxdmlvvxfgxtwihm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958233.4125524-1239-158685687248894/AnsiballZ_file.py'
Feb 01 15:03:54 compute-0 sudo[210540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:54 compute-0 python3.9[210542]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:54 compute-0 sudo[210540]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:54 compute-0 sudo[210692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrglossbpcvqhvdjcgvtqolbkqwywsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958234.5875528-1252-241199231962175/AnsiballZ_command.py'
Feb 01 15:03:54 compute-0 sudo[210692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:03:54 compute-0 ceph-mon[75179]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:55 compute-0 python3.9[210694]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:03:55 compute-0 sudo[210692]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:03:55 compute-0 sudo[210845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbcdcohcawljnfhrwaroscsuhbnxcos ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769958235.3030927-1260-54302578402513/AnsiballZ_edpm_nftables_from_files.py'
Feb 01 15:03:55 compute-0 sudo[210845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:55 compute-0 python3[210847]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 01 15:03:55 compute-0 sudo[210845]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:56 compute-0 sudo[210997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtqxpyzeszdtlprpdptpazjatsffkjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958236.101887-1268-105398413731802/AnsiballZ_stat.py'
Feb 01 15:03:56 compute-0 sudo[210997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:56 compute-0 python3.9[210999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:56 compute-0 sudo[210997]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:56 compute-0 sudo[211075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqtgsnvibkcsfzqlywgltxkdibqcehm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958236.101887-1268-105398413731802/AnsiballZ_file.py'
Feb 01 15:03:56 compute-0 sudo[211075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:57 compute-0 ceph-mon[75179]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:57 compute-0 python3.9[211077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:57 compute-0 sudo[211075]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:57 compute-0 sudo[211227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wszicmotgjfvukwahtmemgzdqqmjsgrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958237.3567708-1280-63996755143542/AnsiballZ_stat.py'
Feb 01 15:03:57 compute-0 sudo[211227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:57 compute-0 python3.9[211229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:57 compute-0 sudo[211227]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:58 compute-0 sudo[211352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgdcmjhbebkbizkmeutkbhsjjiayjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958237.3567708-1280-63996755143542/AnsiballZ_copy.py'
Feb 01 15:03:58 compute-0 sudo[211352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:58 compute-0 python3.9[211354]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958237.3567708-1280-63996755143542/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:58 compute-0 sudo[211352]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:59 compute-0 sudo[211504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjhklbxwwdlnxyqcmnsbnaynvvgaogl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958238.7624905-1295-179547816917089/AnsiballZ_stat.py'
Feb 01 15:03:59 compute-0 sudo[211504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:59 compute-0 ceph-mon[75179]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:03:59 compute-0 python3.9[211506]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:03:59 compute-0 sudo[211504]: pam_unix(sudo:session): session closed for user root
Feb 01 15:03:59 compute-0 sudo[211582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcpcootfehinhuvnxuabbvzvojdwpshg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958238.7624905-1295-179547816917089/AnsiballZ_file.py'
Feb 01 15:03:59 compute-0 sudo[211582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:03:59 compute-0 python3.9[211584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:03:59 compute-0 sudo[211582]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:00 compute-0 sudo[211734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxsmhyxnlhsnlerrhbgdyeyvqfpcnhng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958239.8876588-1307-183436561387788/AnsiballZ_stat.py'
Feb 01 15:04:00 compute-0 sudo[211734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:00 compute-0 python3.9[211736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:00 compute-0 sudo[211734]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:00 compute-0 sudo[211812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmbeezbgnpckmbqhnjgjqrhxposocvaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958239.8876588-1307-183436561387788/AnsiballZ_file.py'
Feb 01 15:04:00 compute-0 sudo[211812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:00 compute-0 python3.9[211814]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:00 compute-0 sudo[211812]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:01 compute-0 ceph-mon[75179]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.121806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241121841, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, "total_data_size": 3579034, "memory_usage": 3629568, "flush_reason": "Manual Compaction"}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241138884, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3491766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9698, "largest_seqno": 11743, "table_properties": {"data_size": 3482469, "index_size": 5919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17926, "raw_average_key_size": 19, "raw_value_size": 3464030, "raw_average_value_size": 3765, "num_data_blocks": 269, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958008, "oldest_key_time": 1769958008, "file_creation_time": 1769958241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17129 microseconds, and 4533 cpu microseconds.
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.138936) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3491766 bytes OK
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.138957) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140596) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140613) EVENT_LOG_v1 {"time_micros": 1769958241140607, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.140633) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3570485, prev total WAL file size 3570485, number of live WAL files 2.
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.141342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3409KB)], [26(6003KB)]
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241141378, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9639399, "oldest_snapshot_seqno": -1}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3701 keys, 8050904 bytes, temperature: kUnknown
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241169624, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8050904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8022401, "index_size": 18153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88843, "raw_average_key_size": 24, "raw_value_size": 7951828, "raw_average_value_size": 2148, "num_data_blocks": 787, "num_entries": 3701, "num_filter_entries": 3701, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.169823) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8050904 bytes
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.170995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 340.6 rd, 284.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4215, records dropped: 514 output_compression: NoCompression
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171015) EVENT_LOG_v1 {"time_micros": 1769958241171004, "job": 10, "event": "compaction_finished", "compaction_time_micros": 28305, "compaction_time_cpu_micros": 15326, "output_level": 6, "num_output_files": 1, "total_output_size": 8050904, "num_input_records": 4215, "num_output_records": 3701, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241171413, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958241171895, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.141261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:04:01.171964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:04:01 compute-0 sudo[211964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvrvdobzuovdrnirdgsenssbyalnpmeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958240.9854302-1319-183483607404301/AnsiballZ_stat.py'
Feb 01 15:04:01 compute-0 sudo[211964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:01 compute-0 python3.9[211966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:01 compute-0 sudo[211964]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:01 compute-0 sudo[212089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hztknxfkaxrxfpbtxafvzavnymktmzsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958240.9854302-1319-183483607404301/AnsiballZ_copy.py'
Feb 01 15:04:01 compute-0 sudo[212089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:01 compute-0 python3.9[212091]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769958240.9854302-1319-183483607404301/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:01 compute-0 sudo[212089]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:02 compute-0 sudo[212241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzapzdbinbtbaeltjumygjxqabgerasw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958242.1818087-1334-269684718049176/AnsiballZ_file.py'
Feb 01 15:04:02 compute-0 sudo[212241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:02 compute-0 python3.9[212243]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:02 compute-0 sudo[212241]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:03 compute-0 sudo[212393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pytnrwsjgzfdanirxnbmxfhomwcvmfxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958242.7799299-1342-281027319734318/AnsiballZ_command.py'
Feb 01 15:04:03 compute-0 sudo[212393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:03 compute-0 ceph-mon[75179]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:03 compute-0 python3.9[212395]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:03 compute-0 sudo[212393]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:03 compute-0 sudo[212548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmkntmqgncpeaizyijgipspryraulqiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958243.481118-1350-278591323726969/AnsiballZ_blockinfile.py'
Feb 01 15:04:03 compute-0 sudo[212548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:04 compute-0 python3.9[212550]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:04 compute-0 sudo[212548]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:04 compute-0 sudo[212700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivbgldhgmxwbfpwcgwrhixqxchjjqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958244.4364111-1359-156722327276117/AnsiballZ_command.py'
Feb 01 15:04:04 compute-0 sudo[212700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:04 compute-0 python3.9[212702]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:04 compute-0 sudo[212700]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:05 compute-0 ceph-mon[75179]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:05 compute-0 sudo[212853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jymxkiyqvxaifcmhqtxxpncqihbfyvmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958245.1215315-1367-27473484509437/AnsiballZ_stat.py'
Feb 01 15:04:05 compute-0 sudo[212853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:05 compute-0 python3.9[212855]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:04:05 compute-0 sudo[212853]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:06 compute-0 sudo[213007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtnewpybbnkxosjsvcpxiiboxgynyhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958245.7576344-1375-126252322852390/AnsiballZ_command.py'
Feb 01 15:04:06 compute-0 sudo[213007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:06 compute-0 python3.9[213009]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:06 compute-0 sudo[213007]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:06 compute-0 sudo[213162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpfznbyqajqqudumaedqslokydqnbfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958246.4856431-1383-188292853351385/AnsiballZ_file.py'
Feb 01 15:04:06 compute-0 sudo[213162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:06 compute-0 python3.9[213164]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:06 compute-0 sudo[213162]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:07 compute-0 ceph-mon[75179]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:07 compute-0 sudo[213314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnlputqpywhxipwamndpjdhbrjoejra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958247.203211-1391-116251663256016/AnsiballZ_stat.py'
Feb 01 15:04:07 compute-0 sudo[213314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:07 compute-0 python3.9[213316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:07 compute-0 sudo[213314]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.795 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:04:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.796 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:04:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:04:07.797 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:04:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:08 compute-0 sudo[213437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvunzylzyumnkhmjxixeaertpbwzxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958247.203211-1391-116251663256016/AnsiballZ_copy.py'
Feb 01 15:04:08 compute-0 sudo[213437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:08 compute-0 python3.9[213439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958247.203211-1391-116251663256016/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:08 compute-0 sudo[213437]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:08 compute-0 sudo[213589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dclybftlkxjjowaxwaqzdtqvggzpqxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958248.432451-1406-156926511530507/AnsiballZ_stat.py'
Feb 01 15:04:08 compute-0 sudo[213589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:08 compute-0 python3.9[213591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:08 compute-0 sudo[213589]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:09 compute-0 ceph-mon[75179]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:09 compute-0 sudo[213712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcnxcgdveeswyrlbigycfjwwnbwrbrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958248.432451-1406-156926511530507/AnsiballZ_copy.py'
Feb 01 15:04:09 compute-0 sudo[213712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:09 compute-0 python3.9[213714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958248.432451-1406-156926511530507/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:09 compute-0 sudo[213712]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:09 compute-0 sudo[213864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dteighfuheehwgeixlljbsutodhrmvep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958249.5941124-1421-36466559955629/AnsiballZ_stat.py'
Feb 01 15:04:09 compute-0 sudo[213864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:10 compute-0 python3.9[213866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:10 compute-0 sudo[213864]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:10 compute-0 sudo[214012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmekeuxtqkojsadzpkmhzauzbxzhbtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958249.5941124-1421-36466559955629/AnsiballZ_copy.py'
Feb 01 15:04:10 compute-0 sudo[214012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:10 compute-0 podman[213961]: 2026-02-01 15:04:10.387016129 +0000 UTC m=+0.051159290 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:04:10 compute-0 podman[213962]: 2026-02-01 15:04:10.435999366 +0000 UTC m=+0.100029033 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:04:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:10 compute-0 python3.9[214022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958249.5941124-1421-36466559955629/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:10 compute-0 sudo[214012]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:11 compute-0 sudo[214183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbzmdiajclkonrzcmgoondcwpluyrvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958250.7481518-1436-1459034859131/AnsiballZ_systemd.py'
Feb 01 15:04:11 compute-0 sudo[214183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:11 compute-0 ceph-mon[75179]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:11 compute-0 python3.9[214185]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:04:11 compute-0 systemd[1]: Reloading.
Feb 01 15:04:11 compute-0 systemd-rc-local-generator[214207]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:04:11 compute-0 systemd-sysv-generator[214210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:04:11 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Feb 01 15:04:11 compute-0 sudo[214183]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:12 compute-0 sudo[214374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeojgbopoyylisepmbsuusdjemldwkds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958251.7809834-1444-45557028570126/AnsiballZ_systemd.py'
Feb 01 15:04:12 compute-0 sudo[214374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:12 compute-0 python3.9[214376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 01 15:04:12 compute-0 systemd[1]: Reloading.
Feb 01 15:04:12 compute-0 systemd-sysv-generator[214406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:04:12 compute-0 systemd-rc-local-generator[214402]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:04:12 compute-0 systemd[1]: Reloading.
Feb 01 15:04:12 compute-0 systemd-rc-local-generator[214435]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:04:12 compute-0 systemd-sysv-generator[214438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:04:12 compute-0 sudo[214374]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:13 compute-0 ceph-mon[75179]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:13 compute-0 sshd-session[155472]: Connection closed by 192.168.122.30 port 53876
Feb 01 15:04:13 compute-0 sshd-session[155469]: pam_unix(sshd:session): session closed for user zuul
Feb 01 15:04:13 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Feb 01 15:04:13 compute-0 systemd[1]: session-48.scope: Consumed 2min 56.514s CPU time.
Feb 01 15:04:13 compute-0 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Feb 01 15:04:13 compute-0 systemd-logind[786]: Removed session 48.
Feb 01 15:04:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:15 compute-0 ceph-mon[75179]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:17 compute-0 ceph-mon[75179]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:04:17
Feb 01 15:04:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:04:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:04:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Feb 01 15:04:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:18 compute-0 sshd-session[214471]: Accepted publickey for zuul from 192.168.122.30 port 60612 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 15:04:18 compute-0 systemd-logind[786]: New session 49 of user zuul.
Feb 01 15:04:18 compute-0 systemd[1]: Started Session 49 of User zuul.
Feb 01 15:04:18 compute-0 sshd-session[214471]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 15:04:19 compute-0 ceph-mon[75179]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:19 compute-0 python3.9[214624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 15:04:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:20 compute-0 python3.9[214778]: ansible-ansible.builtin.service_facts Invoked
Feb 01 15:04:21 compute-0 network[214795]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 15:04:21 compute-0 network[214796]: 'network-scripts' will be removed from distribution in near future.
Feb 01 15:04:21 compute-0 network[214797]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 15:04:21 compute-0 ceph-mon[75179]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:23 compute-0 ceph-mon[75179]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:25 compute-0 sudo[215067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwkqdfbwgtpomeezhlchynekhcxjovad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958264.909665-42-26908564681970/AnsiballZ_setup.py'
Feb 01 15:04:25 compute-0 sudo[215067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:25 compute-0 ceph-mon[75179]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:25 compute-0 python3.9[215069]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 01 15:04:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:25 compute-0 sudo[215067]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:26 compute-0 sudo[215151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjxvrvvlnsttfkrvomajcbmypsiygbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958264.909665-42-26908564681970/AnsiballZ_dnf.py'
Feb 01 15:04:26 compute-0 sudo[215151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:26 compute-0 python3.9[215153]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 15:04:27 compute-0 ceph-mon[75179]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:04:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:04:29 compute-0 ceph-mon[75179]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:30 compute-0 ceph-mon[75179]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:31 compute-0 sudo[215151]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:32 compute-0 sudo[215304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hceayykwhidjenmepzygscrboaprkdrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958271.6140575-54-163564882327122/AnsiballZ_stat.py'
Feb 01 15:04:32 compute-0 sudo[215304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:32 compute-0 python3.9[215306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:04:32 compute-0 sudo[215304]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:33 compute-0 sudo[215456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnouscpuvkqgabfvcdbmjcadkvrthqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958272.5474901-64-107732811908111/AnsiballZ_command.py'
Feb 01 15:04:33 compute-0 sudo[215456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:33 compute-0 ceph-mon[75179]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:33 compute-0 python3.9[215458]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:33 compute-0 sudo[215456]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:33 compute-0 sudo[215609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgtkfbohvrbynbzziqdidqpkuofjbmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958273.534975-74-196684002924918/AnsiballZ_stat.py'
Feb 01 15:04:33 compute-0 sudo[215609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:34 compute-0 python3.9[215611]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:04:34 compute-0 sudo[215609]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:34 compute-0 sudo[215761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjglueiykyeluiayiffmtylckptuinyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958274.2223523-82-180420664847224/AnsiballZ_command.py'
Feb 01 15:04:34 compute-0 sudo[215761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:34 compute-0 python3.9[215763]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:34 compute-0 sudo[215761]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:35 compute-0 ceph-mon[75179]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:35 compute-0 sudo[215914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhqslehjvdxciyqlonmbgikukoqqcnpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958274.9667573-90-119935617272593/AnsiballZ_stat.py'
Feb 01 15:04:35 compute-0 sudo[215914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:35 compute-0 python3.9[215916]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:35 compute-0 sudo[215914]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:36 compute-0 sudo[216037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjltjuuymwfezgavhboumgkbccfntdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958274.9667573-90-119935617272593/AnsiballZ_copy.py'
Feb 01 15:04:36 compute-0 sudo[216037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:36 compute-0 python3.9[216039]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958274.9667573-90-119935617272593/.source.iscsi _original_basename=._y78_1le follow=False checksum=3633b0be9514cf75260a947b044d980e360549a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:36 compute-0 sudo[216037]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:36 compute-0 sudo[216189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oumidovakmoesgfbwtgyebhwfpzikrxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958276.5313463-105-118159075473976/AnsiballZ_file.py'
Feb 01 15:04:36 compute-0 sudo[216189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:37 compute-0 ceph-mon[75179]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:37 compute-0 python3.9[216191]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:37 compute-0 sudo[216189]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:37 compute-0 sudo[216341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsygukheodrzknkjdaxjzcwnaddslzya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958277.3722477-113-235566964787836/AnsiballZ_lineinfile.py'
Feb 01 15:04:37 compute-0 sudo[216341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:37 compute-0 python3.9[216343]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:37 compute-0 sudo[216341]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:38 compute-0 sudo[216493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idkixtsvztjtwkadjbilxhauxkcsfekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958278.2089577-122-209435007539837/AnsiballZ_systemd_service.py'
Feb 01 15:04:38 compute-0 sudo[216493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:39 compute-0 python3.9[216495]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:04:39 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 01 15:04:39 compute-0 sudo[216493]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:39 compute-0 ceph-mon[75179]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:39 compute-0 sudo[216649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chdkztbongxmrdfbvczgmetedyxhebkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958279.2464015-130-226607697261667/AnsiballZ_systemd_service.py'
Feb 01 15:04:39 compute-0 sudo[216649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:39 compute-0 python3.9[216651]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:04:39 compute-0 systemd[1]: Reloading.
Feb 01 15:04:39 compute-0 systemd-rc-local-generator[216675]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:04:39 compute-0 systemd-sysv-generator[216679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:04:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:40 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 01 15:04:40 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 01 15:04:40 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Feb 01 15:04:40 compute-0 systemd[1]: Started Open-iSCSI.
Feb 01 15:04:40 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb 01 15:04:40 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb 01 15:04:40 compute-0 sudo[216649]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:40 compute-0 podman[216825]: 2026-02-01 15:04:40.831618167 +0000 UTC m=+0.074515837 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:04:40 compute-0 podman[216826]: 2026-02-01 15:04:40.855139049 +0000 UTC m=+0.097895205 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 01 15:04:40 compute-0 python3.9[216872]: ansible-ansible.builtin.service_facts Invoked
Feb 01 15:04:41 compute-0 network[216911]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 15:04:41 compute-0 network[216912]: 'network-scripts' will be removed from distribution in near future.
Feb 01 15:04:41 compute-0 network[216913]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 15:04:41 compute-0 ceph-mon[75179]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:43 compute-0 ceph-mon[75179]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:43 compute-0 sudo[217183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwvawcvleoaphrdtwppgibekawuzcwtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958283.699362-153-259629494397479/AnsiballZ_dnf.py'
Feb 01 15:04:43 compute-0 sudo[217183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:44 compute-0 python3.9[217185]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 15:04:45 compute-0 ceph-mon[75179]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 15:04:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 15:04:46 compute-0 systemd[1]: Reloading.
Feb 01 15:04:46 compute-0 systemd-rc-local-generator[217226]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:04:46 compute-0 systemd-sysv-generator[217231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:04:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 15:04:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 15:04:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 15:04:46 compute-0 systemd[1]: run-r13ccf82e5d1445de864a6ad7bdbb300f.service: Deactivated successfully.
Feb 01 15:04:47 compute-0 sudo[217183]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:47 compute-0 ceph-mon[75179]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:47 compute-0 sudo[217501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjifpwbmjzttipdxugnynqgxlmyrqzcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958287.4157264-162-51164382843134/AnsiballZ_file.py'
Feb 01 15:04:47 compute-0 sudo[217501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:47 compute-0 python3.9[217503]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 01 15:04:47 compute-0 sudo[217501]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:48 compute-0 ceph-mon[75179]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:48 compute-0 sudo[217653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmxzpgegwljlgtdiaxulpkvhssfopib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958288.1194303-170-79397137443197/AnsiballZ_modprobe.py'
Feb 01 15:04:48 compute-0 sudo[217653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:48 compute-0 python3.9[217655]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 01 15:04:48 compute-0 sudo[217653]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:04:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:04:49 compute-0 sudo[217809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemoulkzeahbupxzszrlypprspvkkfbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958288.9089894-178-195300655979873/AnsiballZ_stat.py'
Feb 01 15:04:49 compute-0 sudo[217809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:49 compute-0 python3.9[217811]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:49 compute-0 sudo[217809]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:49 compute-0 sudo[217932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rixajxhbafayeoyynwzulafldhumgdzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958288.9089894-178-195300655979873/AnsiballZ_copy.py'
Feb 01 15:04:49 compute-0 sudo[217932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:50 compute-0 python3.9[217934]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958288.9089894-178-195300655979873/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:50 compute-0 sudo[217932]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:50 compute-0 sudo[218084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmlpyitfeanvtwtucjyftvmdetsoakm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958290.312469-194-193969839030913/AnsiballZ_lineinfile.py'
Feb 01 15:04:50 compute-0 sudo[218084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:50 compute-0 python3.9[218086]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:50 compute-0 sudo[218084]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:51 compute-0 ceph-mon[75179]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:51 compute-0 sudo[218236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlurgijaffvmyfnyoeupmfylpbynpedq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958290.9736-202-221403541031352/AnsiballZ_systemd.py'
Feb 01 15:04:51 compute-0 sudo[218236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:51 compute-0 python3.9[218238]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:04:52 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 01 15:04:52 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 01 15:04:52 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 01 15:04:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 01 15:04:52 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 01 15:04:52 compute-0 sudo[218236]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:52 compute-0 sudo[218392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsiafblantufgwwsqlbxfaquxujpshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958292.2718267-210-137794468023954/AnsiballZ_command.py'
Feb 01 15:04:52 compute-0 sudo[218392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:52 compute-0 python3.9[218394]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:52 compute-0 sudo[218392]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:53 compute-0 ceph-mon[75179]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:53 compute-0 sudo[218545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlqhxjfrlbcpvkxdhzgmyzvqkidughz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958293.112592-220-236344134477086/AnsiballZ_stat.py'
Feb 01 15:04:53 compute-0 sudo[218545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:53 compute-0 python3.9[218547]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:04:53 compute-0 sudo[218545]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:54 compute-0 sudo[218645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:04:54 compute-0 sudo[218645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:54 compute-0 sudo[218645]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:54 compute-0 sudo[218672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:04:54 compute-0 sudo[218672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:54 compute-0 sudo[218747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkkbsmcglcbopvvjbokgzrjvlcowtoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958293.9196787-229-18455951116993/AnsiballZ_stat.py'
Feb 01 15:04:54 compute-0 sudo[218747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:54 compute-0 python3.9[218749]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:04:54 compute-0 sudo[218747]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:54 compute-0 sudo[218672]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:04:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:04:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:04:54 compute-0 sudo[218851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:04:54 compute-0 sudo[218851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:54 compute-0 sudo[218851]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:54 compute-0 sudo[218900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:04:54 compute-0 sudo[218900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:54 compute-0 sudo[218951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnpplbewojyglszerwcalngakkskvbny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958293.9196787-229-18455951116993/AnsiballZ_copy.py'
Feb 01 15:04:54 compute-0 sudo[218951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:55 compute-0 python3.9[218953]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958293.9196787-229-18455951116993/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:55 compute-0 sudo[218951]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.081059481 +0000 UTC m=+0.046561955 container create c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:04:55 compute-0 systemd[1]: Started libpod-conmon-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope.
Feb 01 15:04:55 compute-0 ceph-mon[75179]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:04:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.063213271 +0000 UTC m=+0.028715785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.17209107 +0000 UTC m=+0.137593564 container init c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.177426729 +0000 UTC m=+0.142929213 container start c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.180829904 +0000 UTC m=+0.146332418 container attach c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:04:55 compute-0 busy_wilbur[219005]: 167 167
Feb 01 15:04:55 compute-0 systemd[1]: libpod-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope: Deactivated successfully.
Feb 01 15:04:55 compute-0 conmon[219005]: conmon c5324e1227d59fadde18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope/container/memory.events
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.184899618 +0000 UTC m=+0.150402102 container died c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6ebe5b7524288676b71059a11d2a5c920446b9653977d8e15c6ec481e8abacc-merged.mount: Deactivated successfully.
Feb 01 15:04:55 compute-0 podman[218965]: 2026-02-01 15:04:55.220384292 +0000 UTC m=+0.185886766 container remove c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:04:55 compute-0 systemd[1]: libpod-conmon-c5324e1227d59fadde18582567f55d4b54f22ebc2940f2fdc433d9954abf8a8a.scope: Deactivated successfully.
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.375246208 +0000 UTC m=+0.048639163 container create f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:04:55 compute-0 systemd[1]: Started libpod-conmon-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope.
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.356127653 +0000 UTC m=+0.029520658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.4885274 +0000 UTC m=+0.161920395 container init f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.500800074 +0000 UTC m=+0.174193039 container start f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.505559337 +0000 UTC m=+0.178952332 container attach f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:04:55 compute-0 sudo[219176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chuvvpxoduubeqnjhppzkkysycfublze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958295.2173924-244-267944055577345/AnsiballZ_command.py'
Feb 01 15:04:55 compute-0 sudo[219176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:04:55 compute-0 python3.9[219178]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:04:55 compute-0 sudo[219176]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:55 compute-0 nostalgic_williams[219144]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:04:55 compute-0 nostalgic_williams[219144]: --> All data devices are unavailable
Feb 01 15:04:55 compute-0 systemd[1]: libpod-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope: Deactivated successfully.
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.896395621 +0000 UTC m=+0.569788586 container died f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa20a8e704a7623b0b6c3b5d85f4fae2a87489b2f8b64dea3af549cce118fce4-merged.mount: Deactivated successfully.
Feb 01 15:04:55 compute-0 podman[219082]: 2026-02-01 15:04:55.937934974 +0000 UTC m=+0.611327959 container remove f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 15:04:55 compute-0 systemd[1]: libpod-conmon-f0bed975f6f73ddea146ab8f771ffdbf94c607fafa44a546ba2a6b15c6f1cc33.scope: Deactivated successfully.
Feb 01 15:04:55 compute-0 sudo[218900]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:56 compute-0 sudo[219271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:04:56 compute-0 sudo[219271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:56 compute-0 sudo[219271]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:56 compute-0 sudo[219308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:04:56 compute-0 sudo[219308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:56 compute-0 sudo[219406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omishhmvjfyzgknhwcbznpqmzvttipqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958295.9382195-252-89757356481316/AnsiballZ_lineinfile.py'
Feb 01 15:04:56 compute-0 sudo[219406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.307440321 +0000 UTC m=+0.042393658 container create 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 15:04:56 compute-0 systemd[1]: Started libpod-conmon-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope.
Feb 01 15:04:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:56 compute-0 python3.9[219408]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.374426646 +0000 UTC m=+0.109379983 container init 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.381500785 +0000 UTC m=+0.116454142 container start 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:04:56 compute-0 hungry_liskov[219439]: 167 167
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.289156549 +0000 UTC m=+0.024109926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:56 compute-0 systemd[1]: libpod-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope: Deactivated successfully.
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.385502997 +0000 UTC m=+0.120456364 container attach 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:04:56 compute-0 sudo[219406]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.386067562 +0000 UTC m=+0.121020909 container died 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 15:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb7e329b08a989fd97b25919b17eda042e793033435d3e90d17c0217fe03374-merged.mount: Deactivated successfully.
Feb 01 15:04:56 compute-0 podman[219422]: 2026-02-01 15:04:56.42456675 +0000 UTC m=+0.159520087 container remove 5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:04:56 compute-0 systemd[1]: libpod-conmon-5c6f0e263e6f7e6f621d3052b3cadd36c00efdd92798b206d6db3f5c15aa8ce0.scope: Deactivated successfully.
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.569047866 +0000 UTC m=+0.045159696 container create e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:04:56 compute-0 systemd[1]: Started libpod-conmon-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope.
Feb 01 15:04:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.545061664 +0000 UTC m=+0.021173504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.655720343 +0000 UTC m=+0.131832153 container init e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.66384379 +0000 UTC m=+0.139955590 container start e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.667257786 +0000 UTC m=+0.143369616 container attach e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:04:56 compute-0 zealous_sammet[219532]: {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     "0": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "devices": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "/dev/loop3"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             ],
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_name": "ceph_lv0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_size": "21470642176",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "name": "ceph_lv0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "tags": {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_name": "ceph",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.crush_device_class": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.encrypted": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.objectstore": "bluestore",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_id": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.vdo": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.with_tpm": "0"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             },
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "vg_name": "ceph_vg0"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         }
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     ],
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     "1": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "devices": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "/dev/loop4"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             ],
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_name": "ceph_lv1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_size": "21470642176",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "name": "ceph_lv1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "tags": {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_name": "ceph",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.crush_device_class": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.encrypted": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.objectstore": "bluestore",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_id": "1",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.vdo": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.with_tpm": "0"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             },
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "vg_name": "ceph_vg1"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         }
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     ],
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     "2": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "devices": [
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "/dev/loop5"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             ],
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_name": "ceph_lv2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_size": "21470642176",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "name": "ceph_lv2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "tags": {
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.cluster_name": "ceph",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.crush_device_class": "",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.encrypted": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.objectstore": "bluestore",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osd_id": "2",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.vdo": "0",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:                 "ceph.with_tpm": "0"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             },
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "type": "block",
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:             "vg_name": "ceph_vg2"
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:         }
Feb 01 15:04:56 compute-0 zealous_sammet[219532]:     ]
Feb 01 15:04:56 compute-0 zealous_sammet[219532]: }
Feb 01 15:04:56 compute-0 systemd[1]: libpod-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope: Deactivated successfully.
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.949665413 +0000 UTC m=+0.425777213 container died e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 15:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e31a4066a9e7d413efeafaba4cfc25c0e434fe40829376d2d31829cf2d65dd7-merged.mount: Deactivated successfully.
Feb 01 15:04:56 compute-0 podman[219485]: 2026-02-01 15:04:56.991803603 +0000 UTC m=+0.467915393 container remove e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 15:04:56 compute-0 systemd[1]: libpod-conmon-e2ec3c923105ca983785272c01b6f4499664a698d75a838ae57e6e8b0178ba03.scope: Deactivated successfully.
Feb 01 15:04:57 compute-0 sudo[219308]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:57 compute-0 sudo[219601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:04:57 compute-0 sudo[219601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:57 compute-0 sudo[219601]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:57 compute-0 sudo[219650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:04:57 compute-0 sudo[219650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:57 compute-0 sudo[219695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhstrepejfzzmnvcrkgiuldibbwkedex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958296.586441-260-227525369576994/AnsiballZ_replace.py'
Feb 01 15:04:57 compute-0 sudo[219695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:57 compute-0 ceph-mon[75179]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:57 compute-0 python3.9[219699]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:57 compute-0 sudo[219695]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:57 compute-0 podman[219714]: 2026-02-01 15:04:57.388610464 +0000 UTC m=+0.040545547 container create 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 01 15:04:57 compute-0 systemd[1]: Started libpod-conmon-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope.
Feb 01 15:04:57 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:57 compute-0 podman[219714]: 2026-02-01 15:04:57.464855889 +0000 UTC m=+0.116791002 container init 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 01 15:04:57 compute-0 podman[219714]: 2026-02-01 15:04:57.372906834 +0000 UTC m=+0.024841957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:57 compute-0 podman[219714]: 2026-02-01 15:04:57.470763054 +0000 UTC m=+0.122698137 container start 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 15:04:57 compute-0 podman[219714]: 2026-02-01 15:04:57.474472438 +0000 UTC m=+0.126407621 container attach 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:04:57 compute-0 cranky_bell[219754]: 167 167
Feb 01 15:04:57 compute-0 systemd[1]: libpod-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope: Deactivated successfully.
Feb 01 15:04:57 compute-0 conmon[219754]: conmon 55ac16ce6946dfa0af4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope/container/memory.events
Feb 01 15:04:57 compute-0 podman[219780]: 2026-02-01 15:04:57.518646645 +0000 UTC m=+0.028145359 container died 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 15:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d72881461fc54d6b8844138f160a7e75fffb7b1c2d7892aea9ed0e8bc3367d24-merged.mount: Deactivated successfully.
Feb 01 15:04:57 compute-0 podman[219780]: 2026-02-01 15:04:57.560450215 +0000 UTC m=+0.069948929 container remove 55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:04:57 compute-0 systemd[1]: libpod-conmon-55ac16ce6946dfa0af4e95e31b976a2159e56691e8642f910172f1b6a01d4917.scope: Deactivated successfully.
Feb 01 15:04:57 compute-0 podman[219857]: 2026-02-01 15:04:57.729219851 +0000 UTC m=+0.041888754 container create bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 15:04:57 compute-0 systemd[1]: Started libpod-conmon-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope.
Feb 01 15:04:57 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:04:57 compute-0 sudo[219925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uehnsqdecrnzllfdjduwmbfcfjcjxgmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958297.495809-268-188003687677863/AnsiballZ_replace.py'
Feb 01 15:04:57 compute-0 sudo[219925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:57 compute-0 podman[219857]: 2026-02-01 15:04:57.803182812 +0000 UTC m=+0.115851755 container init bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 15:04:57 compute-0 podman[219857]: 2026-02-01 15:04:57.711938427 +0000 UTC m=+0.024607330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:04:57 compute-0 podman[219857]: 2026-02-01 15:04:57.812339569 +0000 UTC m=+0.125008472 container start bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:04:57 compute-0 podman[219857]: 2026-02-01 15:04:57.818378928 +0000 UTC m=+0.131047971 container attach bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 01 15:04:57 compute-0 python3.9[219927]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:57 compute-0 sudo[219925]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:58 compute-0 lvm[220151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:04:58 compute-0 lvm[220151]: VG ceph_vg0 finished
Feb 01 15:04:58 compute-0 lvm[220155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:04:58 compute-0 lvm[220155]: VG ceph_vg1 finished
Feb 01 15:04:58 compute-0 sudo[220152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhsltnrbdvogqdkmmvafrqredurrsfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958298.1801836-277-268685826212082/AnsiballZ_lineinfile.py'
Feb 01 15:04:58 compute-0 sudo[220152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:58 compute-0 lvm[220159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:04:58 compute-0 lvm[220160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:04:58 compute-0 lvm[220160]: VG ceph_vg0 finished
Feb 01 15:04:58 compute-0 lvm[220159]: VG ceph_vg2 finished
Feb 01 15:04:58 compute-0 jolly_babbage[219910]: {}
Feb 01 15:04:58 compute-0 systemd[1]: libpod-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Deactivated successfully.
Feb 01 15:04:58 compute-0 systemd[1]: libpod-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Consumed 1.125s CPU time.
Feb 01 15:04:58 compute-0 podman[219857]: 2026-02-01 15:04:58.580804287 +0000 UTC m=+0.893473190 container died bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 15:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-55af17419364959b2d60b08844685a9f8db27ec8b37575a7f1e56d48bced133e-merged.mount: Deactivated successfully.
Feb 01 15:04:58 compute-0 python3.9[220157]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:58 compute-0 podman[219857]: 2026-02-01 15:04:58.627284398 +0000 UTC m=+0.939953321 container remove bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:04:58 compute-0 sudo[220152]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:58 compute-0 systemd[1]: libpod-conmon-bc3068e213f0cbc288d10b596ab0f217b15b5fde70c63cb1d8326a7ecd3d1c43.scope: Deactivated successfully.
Feb 01 15:04:58 compute-0 sudo[219650]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:04:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:04:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:58 compute-0 sudo[220199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:04:58 compute-0 sudo[220199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:04:58 compute-0 sudo[220199]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:59 compute-0 sudo[220349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpfppuuyvyzkmgqtioqgrsmdsolhokpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958298.7681787-277-185457855466237/AnsiballZ_lineinfile.py'
Feb 01 15:04:59 compute-0 sudo[220349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:59 compute-0 ceph-mon[75179]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:04:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:04:59 compute-0 python3.9[220351]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:59 compute-0 sudo[220349]: pam_unix(sudo:session): session closed for user root
Feb 01 15:04:59 compute-0 sudo[220501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yopjtajpgovfrxvtirvkxiucmsfxnvda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958299.4103434-277-172303291718170/AnsiballZ_lineinfile.py'
Feb 01 15:04:59 compute-0 sudo[220501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:04:59 compute-0 python3.9[220503]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:04:59 compute-0 sudo[220501]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:00 compute-0 sudo[220653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ophkcjeweyqtrrnikvletocdrczudqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958300.1039836-277-252849427293895/AnsiballZ_lineinfile.py'
Feb 01 15:05:00 compute-0 sudo[220653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:00 compute-0 python3.9[220655]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:00 compute-0 sudo[220653]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:00 compute-0 sudo[220805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnlsyvampiqehuzqfhinhzybuysybwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958300.7240696-306-252102860437724/AnsiballZ_stat.py'
Feb 01 15:05:00 compute-0 sudo[220805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:01 compute-0 python3.9[220807]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:05:01 compute-0 ceph-mon[75179]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:01 compute-0 sudo[220805]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:01 compute-0 sudo[220959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgvvvshuyfvyvsaityiversnkjuitdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958301.3115942-314-29404422722462/AnsiballZ_command.py'
Feb 01 15:05:01 compute-0 sudo[220959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:01 compute-0 python3.9[220961]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:01 compute-0 sudo[220959]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:02 compute-0 sudo[221112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuinizopcdejcynhsgqdefxmtnoqjexf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958301.984554-323-209579603192243/AnsiballZ_systemd_service.py'
Feb 01 15:05:02 compute-0 sudo[221112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:02 compute-0 python3.9[221114]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:02 compute-0 systemd[1]: Listening on multipathd control socket.
Feb 01 15:05:02 compute-0 sudo[221112]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:03 compute-0 sudo[221268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdiqgmwcgdfkeixlizrznjlwtweuugck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958302.8655553-331-131711249474605/AnsiballZ_systemd_service.py'
Feb 01 15:05:03 compute-0 sudo[221268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:03 compute-0 ceph-mon[75179]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:03 compute-0 python3.9[221270]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:03 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 01 15:05:03 compute-0 udevadm[221275]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 01 15:05:03 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 01 15:05:03 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 01 15:05:03 compute-0 multipathd[221279]: --------start up--------
Feb 01 15:05:03 compute-0 multipathd[221279]: read /etc/multipath.conf
Feb 01 15:05:03 compute-0 multipathd[221279]: path checkers start up
Feb 01 15:05:03 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 01 15:05:03 compute-0 sudo[221268]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:04 compute-0 sudo[221436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjfpvhcbqgauvirlfkbjmewucrnofmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958304.010264-343-205317218503226/AnsiballZ_file.py'
Feb 01 15:05:04 compute-0 sudo[221436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:04 compute-0 python3.9[221438]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 01 15:05:04 compute-0 sudo[221436]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:04 compute-0 sudo[221588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohtcuyuqjuuettdlydqeurburxiaqdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958304.7043424-351-46578208229752/AnsiballZ_modprobe.py'
Feb 01 15:05:04 compute-0 sudo[221588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:05 compute-0 python3.9[221590]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 01 15:05:05 compute-0 kernel: Key type psk registered
Feb 01 15:05:05 compute-0 ceph-mon[75179]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:05 compute-0 sudo[221588]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:05 compute-0 sudo[221749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmfqrfiblkkekmpvmsyyhzeyzzhtxyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958305.5295756-359-164621091947738/AnsiballZ_stat.py'
Feb 01 15:05:05 compute-0 sudo[221749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:05 compute-0 python3.9[221751]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:05:05 compute-0 sudo[221749]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:06 compute-0 sudo[221872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcxusvijitquiydyoewrrwpflrgfkda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958305.5295756-359-164621091947738/AnsiballZ_copy.py'
Feb 01 15:05:06 compute-0 sudo[221872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:06 compute-0 python3.9[221874]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769958305.5295756-359-164621091947738/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:06 compute-0 sudo[221872]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:06 compute-0 sudo[222024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zucpzzqbqlvhqzflnrrwzxmyxdfokywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958306.7025619-375-244375115144043/AnsiballZ_lineinfile.py'
Feb 01 15:05:06 compute-0 sudo[222024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:07 compute-0 python3.9[222026]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:07 compute-0 sudo[222024]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:07 compute-0 ceph-mon[75179]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:07 compute-0 sudo[222176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfohexdpytiwtvayinddgxzggihjovlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958307.341986-383-60525944492888/AnsiballZ_systemd.py'
Feb 01 15:05:07 compute-0 sudo[222176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.797 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:05:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.798 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:05:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:05:07.798 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:05:07 compute-0 python3.9[222178]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:05:07 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 01 15:05:07 compute-0 systemd[1]: Stopped Load Kernel Modules.
Feb 01 15:05:07 compute-0 systemd[1]: Stopping Load Kernel Modules...
Feb 01 15:05:07 compute-0 systemd[1]: Starting Load Kernel Modules...
Feb 01 15:05:07 compute-0 systemd[1]: Finished Load Kernel Modules.
Feb 01 15:05:07 compute-0 sudo[222176]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:08 compute-0 sudo[222332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwcinnrmkgqtfxavakqpzingwkkpant ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958308.1911793-391-139613854210126/AnsiballZ_dnf.py'
Feb 01 15:05:08 compute-0 sudo[222332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:08 compute-0 python3.9[222334]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 01 15:05:09 compute-0 ceph-mon[75179]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:10 compute-0 podman[222339]: 2026-02-01 15:05:10.990360676 +0000 UTC m=+0.074991671 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 01 15:05:11 compute-0 podman[222340]: 2026-02-01 15:05:11.007167957 +0000 UTC m=+0.086414641 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 15:05:11 compute-0 systemd[1]: Reloading.
Feb 01 15:05:11 compute-0 systemd-rc-local-generator[222410]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:05:11 compute-0 systemd-sysv-generator[222413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:05:11 compute-0 ceph-mon[75179]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:11 compute-0 systemd[1]: Reloading.
Feb 01 15:05:11 compute-0 systemd-rc-local-generator[222443]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:05:11 compute-0 systemd-sysv-generator[222449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:05:11 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 01 15:05:11 compute-0 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 01 15:05:11 compute-0 lvm[222490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:05:11 compute-0 lvm[222490]: VG ceph_vg0 finished
Feb 01 15:05:11 compute-0 lvm[222491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:05:11 compute-0 lvm[222491]: VG ceph_vg2 finished
Feb 01 15:05:11 compute-0 lvm[222492]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:05:11 compute-0 lvm[222492]: VG ceph_vg1 finished
Feb 01 15:05:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 01 15:05:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Feb 01 15:05:11 compute-0 systemd[1]: Reloading.
Feb 01 15:05:12 compute-0 systemd-sysv-generator[222549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:05:12 compute-0 systemd-rc-local-generator[222546]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:05:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:12 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 01 15:05:12 compute-0 sudo[222332]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 01 15:05:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Feb 01 15:05:12 compute-0 systemd[1]: run-r233f14e7165b4044adbfb0376f2b3273.service: Deactivated successfully.
Feb 01 15:05:13 compute-0 sudo[223846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqokqxmeakcsbztpmvsgkyubfbjkpka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958312.7693756-399-199776042153401/AnsiballZ_systemd_service.py'
Feb 01 15:05:13 compute-0 sudo[223846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:13 compute-0 ceph-mon[75179]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:13 compute-0 python3.9[223848]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:05:13 compute-0 iscsid[216691]: iscsid shutting down.
Feb 01 15:05:13 compute-0 systemd[1]: Stopping Open-iSCSI...
Feb 01 15:05:13 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Feb 01 15:05:13 compute-0 systemd[1]: Stopped Open-iSCSI.
Feb 01 15:05:13 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 01 15:05:13 compute-0 systemd[1]: Starting Open-iSCSI...
Feb 01 15:05:13 compute-0 systemd[1]: Started Open-iSCSI.
Feb 01 15:05:13 compute-0 sudo[223846]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:14 compute-0 sudo[224002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqykqeshwfzjrwacfhujucdlfrgjvbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958313.6199634-407-135015000389215/AnsiballZ_systemd_service.py'
Feb 01 15:05:14 compute-0 sudo[224002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:14 compute-0 python3.9[224004]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:05:14 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 01 15:05:14 compute-0 multipathd[221279]: exit (signal)
Feb 01 15:05:14 compute-0 multipathd[221279]: --------shut down-------
Feb 01 15:05:14 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Feb 01 15:05:14 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 01 15:05:14 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 01 15:05:14 compute-0 multipathd[224011]: --------start up--------
Feb 01 15:05:14 compute-0 multipathd[224011]: read /etc/multipath.conf
Feb 01 15:05:14 compute-0 multipathd[224011]: path checkers start up
Feb 01 15:05:14 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 01 15:05:14 compute-0 sudo[224002]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:15 compute-0 python3.9[224168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 01 15:05:15 compute-0 ceph-mon[75179]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:16 compute-0 sudo[224322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alissrezkbxqgrsasofitehepmudzgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958315.70431-425-81568206668155/AnsiballZ_file.py'
Feb 01 15:05:16 compute-0 sudo[224322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:16 compute-0 python3.9[224324]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:16 compute-0 sudo[224322]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:16 compute-0 sudo[224474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pklokpezycoeuuotzvjfbfeangxjzhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958316.6265604-436-86771724697542/AnsiballZ_systemd_service.py'
Feb 01 15:05:16 compute-0 sudo[224474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:17 compute-0 ceph-mon[75179]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:17 compute-0 python3.9[224476]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:05:17 compute-0 systemd[1]: Reloading.
Feb 01 15:05:17 compute-0 systemd-sysv-generator[224501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:05:17 compute-0 systemd-rc-local-generator[224497]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:05:17 compute-0 sudo[224474]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:05:17
Feb 01 15:05:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:05:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:05:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms']
Feb 01 15:05:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:18 compute-0 python3.9[224661]: ansible-ansible.builtin.service_facts Invoked
Feb 01 15:05:18 compute-0 network[224678]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 01 15:05:18 compute-0 network[224679]: 'network-scripts' will be removed from distribution in near future.
Feb 01 15:05:18 compute-0 network[224680]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:19 compute-0 ceph-mon[75179]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:21 compute-0 sudo[224951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lebyvcefyaojznhmvxvwwuddqbveumkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958320.8776734-455-43652579809562/AnsiballZ_systemd_service.py'
Feb 01 15:05:21 compute-0 sudo[224951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:21 compute-0 ceph-mon[75179]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:21 compute-0 python3.9[224953]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:21 compute-0 sudo[224951]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:22 compute-0 sudo[225104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enanwcgtgamzuulpbsogwixzeebbgipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958321.667596-455-42941770523969/AnsiballZ_systemd_service.py'
Feb 01 15:05:22 compute-0 sudo[225104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:22 compute-0 python3.9[225106]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:22 compute-0 sudo[225104]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:22 compute-0 sudo[225257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhkjwhhnfvxoabxlloezjzmpwaijscb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958322.6762998-455-210542487653743/AnsiballZ_systemd_service.py'
Feb 01 15:05:22 compute-0 sudo[225257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:23 compute-0 python3.9[225259]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:23 compute-0 ceph-mon[75179]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:23 compute-0 sudo[225257]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:23 compute-0 sudo[225410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sllgumnmopkjufoxkjxbyysczbyrjtio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958323.3688905-455-35206417066753/AnsiballZ_systemd_service.py'
Feb 01 15:05:23 compute-0 sudo[225410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:23 compute-0 python3.9[225412]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:23 compute-0 sudo[225410]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:24 compute-0 sudo[225563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyratrfimaarxciexpfkmjofvpqzndkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958324.0698273-455-227996369555774/AnsiballZ_systemd_service.py'
Feb 01 15:05:24 compute-0 sudo[225563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:24 compute-0 python3.9[225565]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:24 compute-0 sudo[225563]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:25 compute-0 sudo[225716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgheuggzotsusvvshmzxwuutdqbppfvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958324.8409727-455-104188893996681/AnsiballZ_systemd_service.py'
Feb 01 15:05:25 compute-0 sudo[225716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:25 compute-0 ceph-mon[75179]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:25 compute-0 python3.9[225718]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:25 compute-0 sudo[225716]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:25 compute-0 sudo[225869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxcgnhpqiiipleysorqpwfjdmwcqqjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958325.5970469-455-254965546557636/AnsiballZ_systemd_service.py'
Feb 01 15:05:25 compute-0 sudo[225869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:26 compute-0 python3.9[225871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:26 compute-0 sudo[225869]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:26 compute-0 sudo[226022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otorxorjuwjcrwhxirdhvkjfpjbjmvpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958326.3811607-455-274435382085682/AnsiballZ_systemd_service.py'
Feb 01 15:05:26 compute-0 sudo[226022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:26 compute-0 python3.9[226024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:05:26 compute-0 sudo[226022]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:27 compute-0 ceph-mon[75179]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:27 compute-0 sudo[226175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etmempanuryrdshwenhpjdozyypyxgio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958327.3023326-514-99019080882937/AnsiballZ_file.py'
Feb 01 15:05:27 compute-0 sudo[226175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:27 compute-0 python3.9[226177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:27 compute-0 sudo[226175]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:05:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:05:28 compute-0 sudo[226327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yydkdnhhnrrxskdgcrnzigqnyzdgloxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958327.8729186-514-109695614450974/AnsiballZ_file.py'
Feb 01 15:05:28 compute-0 sudo[226327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:28 compute-0 python3.9[226329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:28 compute-0 sudo[226327]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:28 compute-0 sudo[226479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigppqmkdaaqtcwodwmnbobpdlqejpsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958328.5002034-514-274258363373043/AnsiballZ_file.py'
Feb 01 15:05:28 compute-0 sudo[226479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:29 compute-0 python3.9[226481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:29 compute-0 sudo[226479]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:29 compute-0 ceph-mon[75179]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:29 compute-0 sudo[226631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kngrbpsweuqqzjqoeisdrfdjdnilmezl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958329.1708226-514-13776653511052/AnsiballZ_file.py'
Feb 01 15:05:29 compute-0 sudo[226631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:29 compute-0 python3.9[226633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:29 compute-0 sudo[226631]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:29 compute-0 sudo[226783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjleznipvgvwddrajarkbxibmtdazxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958329.7168205-514-93862316139535/AnsiballZ_file.py'
Feb 01 15:05:29 compute-0 sudo[226783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:30 compute-0 python3.9[226785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:30 compute-0 sudo[226783]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:30 compute-0 sudo[226935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecyuecqqfixyngtbswqpthdgvolecoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958330.2922878-514-246542999136564/AnsiballZ_file.py'
Feb 01 15:05:30 compute-0 sudo[226935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:30 compute-0 python3.9[226937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:30 compute-0 sudo[226935]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:31 compute-0 sudo[227087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymyyvqbdmivtzblkhoubhzbkqtwmoch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958330.9775836-514-263625514686014/AnsiballZ_file.py'
Feb 01 15:05:31 compute-0 sudo[227087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:31 compute-0 ceph-mon[75179]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:31 compute-0 python3.9[227089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:31 compute-0 sudo[227087]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:31 compute-0 sudo[227239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmkngiqozjmgzbkclqeeebigzqhskvdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958331.650078-514-158549065045910/AnsiballZ_file.py'
Feb 01 15:05:31 compute-0 sudo[227239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:32 compute-0 python3.9[227241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:32 compute-0 sudo[227239]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:32 compute-0 sudo[227391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aozdvulisxtkjlmvcigrrwywsrvcoain ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958332.298419-571-13771719547229/AnsiballZ_file.py'
Feb 01 15:05:32 compute-0 sudo[227391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:32 compute-0 python3.9[227393]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:32 compute-0 sudo[227391]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:33 compute-0 sudo[227543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqiaxisowimjuqdniwzozvoaghpjfcvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958332.8976033-571-88370454579134/AnsiballZ_file.py'
Feb 01 15:05:33 compute-0 sudo[227543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:33 compute-0 ceph-mon[75179]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:33 compute-0 python3.9[227545]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:33 compute-0 sudo[227543]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:33 compute-0 sudo[227695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatxghfmhpltavlyxtvtjtlbrugccvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958333.4464035-571-48557017431358/AnsiballZ_file.py'
Feb 01 15:05:33 compute-0 sudo[227695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:33 compute-0 python3.9[227697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:33 compute-0 sudo[227695]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:34 compute-0 sudo[227847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwbdxcazincfjcmzdcgprjvisftodbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958334.0188372-571-18447227402799/AnsiballZ_file.py'
Feb 01 15:05:34 compute-0 sudo[227847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:34 compute-0 python3.9[227849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:34 compute-0 sudo[227847]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:34 compute-0 sudo[227999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjxdhicqyzmphvtpytpjfsmitywxrka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958334.575511-571-258554831789491/AnsiballZ_file.py'
Feb 01 15:05:34 compute-0 sudo[227999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:35 compute-0 python3.9[228001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:35 compute-0 sudo[227999]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:35 compute-0 ceph-mon[75179]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:35 compute-0 sudo[228151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqesxaltzmoyisgmjrofqupkgnbordod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958335.210218-571-147160027930809/AnsiballZ_file.py'
Feb 01 15:05:35 compute-0 sudo[228151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:35 compute-0 python3.9[228153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:35 compute-0 sudo[228151]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:36 compute-0 sudo[228303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plvbvyeukfnuhfbquxylvivpgfspedbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958335.792741-571-258123095508506/AnsiballZ_file.py'
Feb 01 15:05:36 compute-0 sudo[228303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:36 compute-0 python3.9[228305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:36 compute-0 sudo[228303]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:36 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 01 15:05:36 compute-0 sudo[228456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxawtozaedfbvkakvyagzestbhzrrzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958336.3464353-571-166198878785902/AnsiballZ_file.py'
Feb 01 15:05:36 compute-0 sudo[228456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:36 compute-0 python3.9[228458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:05:36 compute-0 sudo[228456]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:37 compute-0 sudo[228608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axvlroqifovnuxhqoclglknfanatxuke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958336.9878287-629-120052309518320/AnsiballZ_command.py'
Feb 01 15:05:37 compute-0 sudo[228608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:37 compute-0 ceph-mon[75179]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:37 compute-0 python3.9[228610]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:37 compute-0 sudo[228608]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 01 15:05:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:38 compute-0 python3.9[228763]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 01 15:05:38 compute-0 sudo[228913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muwkvxzfgjynulbjbokjngrsyrbwkkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958338.4119992-647-239832426810303/AnsiballZ_systemd_service.py'
Feb 01 15:05:38 compute-0 sudo[228913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:38 compute-0 python3.9[228915]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:05:38 compute-0 systemd[1]: Reloading.
Feb 01 15:05:39 compute-0 systemd-sysv-generator[228940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:05:39 compute-0 systemd-rc-local-generator[228933]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:05:39 compute-0 sudo[228913]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:39 compute-0 ceph-mon[75179]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:39 compute-0 sudo[229100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmwkzefwrqsabefjhlgyhbyydlkahmtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958339.4259315-655-215769254346222/AnsiballZ_command.py'
Feb 01 15:05:39 compute-0 sudo[229100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:39 compute-0 python3.9[229102]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:39 compute-0 sudo[229100]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:40 compute-0 sudo[229253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftlubbkmeljnepmwdqezqdmgglmgqflj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958340.0635195-655-38479391146638/AnsiballZ_command.py'
Feb 01 15:05:40 compute-0 sudo[229253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:40 compute-0 ceph-mon[75179]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:40 compute-0 python3.9[229255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:40 compute-0 sudo[229253]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:40 compute-0 sudo[229406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuwbbfnfrmvuabgohjcgxvtycsxojekc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958340.636675-655-608863238189/AnsiballZ_command.py'
Feb 01 15:05:40 compute-0 sudo[229406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:41 compute-0 python3.9[229408]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:41 compute-0 sudo[229406]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:41 compute-0 podman[229410]: 2026-02-01 15:05:41.144993992 +0000 UTC m=+0.064755824 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 01 15:05:41 compute-0 podman[229411]: 2026-02-01 15:05:41.218767098 +0000 UTC m=+0.134331333 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 01 15:05:41 compute-0 sudo[229604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgpcuhtiragmzeslvuyrufoahzcqpamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958341.2370124-655-30458249325088/AnsiballZ_command.py'
Feb 01 15:05:41 compute-0 sudo[229604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:41 compute-0 python3.9[229606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:41 compute-0 sudo[229604]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:42 compute-0 sudo[229757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmzgfekuijdtilgevupekqhymkfztzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958341.8311465-655-143471598695300/AnsiballZ_command.py'
Feb 01 15:05:42 compute-0 sudo[229757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:42 compute-0 python3.9[229759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:42 compute-0 sudo[229757]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:42 compute-0 sudo[229910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zguetozhooqsjstiaycxrniovywkwllr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958342.4003315-655-71537737520699/AnsiballZ_command.py'
Feb 01 15:05:42 compute-0 sudo[229910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:42 compute-0 python3.9[229912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:42 compute-0 sudo[229910]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:43 compute-0 ceph-mon[75179]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:43 compute-0 sudo[230063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibjhfkotytactwuevcmzymwlnecnwud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958342.9609406-655-273494645053728/AnsiballZ_command.py'
Feb 01 15:05:43 compute-0 sudo[230063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:43 compute-0 python3.9[230065]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:43 compute-0 sudo[230063]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:43 compute-0 sudo[230216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-matczheijkpwvmutuolsnlhmhqrehwwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958343.513098-655-110455075428075/AnsiballZ_command.py'
Feb 01 15:05:43 compute-0 sudo[230216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:43 compute-0 python3.9[230218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 01 15:05:43 compute-0 sudo[230216]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:45 compute-0 sudo[230369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgivgtwqegzkkwbxblkuwavabsnhwnnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958344.949059-734-48972631624999/AnsiballZ_file.py'
Feb 01 15:05:45 compute-0 ceph-mon[75179]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:45 compute-0 sudo[230369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:45 compute-0 python3.9[230371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:45 compute-0 sudo[230369]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:45 compute-0 sudo[230521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frlqhcybhffijwbbpxxudepavnohzlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958345.4824631-734-242552043802277/AnsiballZ_file.py'
Feb 01 15:05:45 compute-0 sudo[230521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:46 compute-0 python3.9[230523]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:46 compute-0 sudo[230521]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:46 compute-0 sudo[230673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwqnwrvaabqkdjvjlpsfaimubdbqypx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958346.2049065-734-97199806878064/AnsiballZ_file.py'
Feb 01 15:05:46 compute-0 sudo[230673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:46 compute-0 python3.9[230675]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:46 compute-0 sudo[230673]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:46 compute-0 sudo[230825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipkyckzzuymgusnpempqxardxduryen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958346.7650344-756-142104388062615/AnsiballZ_file.py'
Feb 01 15:05:46 compute-0 sudo[230825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:47 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 01 15:05:47 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 01 15:05:47 compute-0 python3.9[230827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:47 compute-0 sudo[230825]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:47 compute-0 ceph-mon[75179]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:47 compute-0 sudo[230980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svlbvnyoqugxkrsrtblisduinscityip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958347.2899768-756-60458488369534/AnsiballZ_file.py'
Feb 01 15:05:47 compute-0 sudo[230980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:47 compute-0 python3.9[230982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:47 compute-0 sudo[230980]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:48 compute-0 sudo[231132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpkvondsjulaitgesybaewvogfkvhtnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958347.8831685-756-182794716209723/AnsiballZ_file.py'
Feb 01 15:05:48 compute-0 sudo[231132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:48 compute-0 python3.9[231134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:48 compute-0 sudo[231132]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:05:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:05:48 compute-0 sudo[231284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tassghzjhelzztoqheducvrnqoemwcvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958348.6472223-756-214157275790310/AnsiballZ_file.py'
Feb 01 15:05:48 compute-0 sudo[231284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:49 compute-0 python3.9[231286]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:49 compute-0 sudo[231284]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:49 compute-0 ceph-mon[75179]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:49 compute-0 sudo[231436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkakgxrhypugbhmrwhpjdblpdnlbrrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958349.2186615-756-275124874915187/AnsiballZ_file.py'
Feb 01 15:05:49 compute-0 sudo[231436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:49 compute-0 python3.9[231438]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:49 compute-0 sudo[231436]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:50 compute-0 sudo[231588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzuglxiktzocrfvofxhgjjffsfxappyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958349.9326916-756-111759539148237/AnsiballZ_file.py'
Feb 01 15:05:50 compute-0 sudo[231588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:50 compute-0 python3.9[231590]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:50 compute-0 sudo[231588]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:50 compute-0 sudo[231740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwgztvockggfdmxaxvycaaoedljdhyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958350.601581-756-121084334306176/AnsiballZ_file.py'
Feb 01 15:05:50 compute-0 sudo[231740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:51 compute-0 python3.9[231742]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:05:51 compute-0 sudo[231740]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:51 compute-0 ceph-mon[75179]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:53 compute-0 ceph-mon[75179]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:55 compute-0 ceph-mon[75179]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:05:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:56 compute-0 sudo[231892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqboowhlvfrrpmqdzrutwgzqccwypkiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958355.8173368-945-33496158412113/AnsiballZ_getent.py'
Feb 01 15:05:56 compute-0 sudo[231892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:56 compute-0 python3.9[231894]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 01 15:05:56 compute-0 sudo[231892]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:57 compute-0 sudo[232045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrzubrgnxgolgcjxqsdcfrmmnwbxzwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958356.6202407-953-161164171528030/AnsiballZ_group.py'
Feb 01 15:05:57 compute-0 sudo[232045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:57 compute-0 python3.9[232047]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 01 15:05:57 compute-0 groupadd[232048]: group added to /etc/group: name=nova, GID=42436
Feb 01 15:05:57 compute-0 groupadd[232048]: group added to /etc/gshadow: name=nova
Feb 01 15:05:57 compute-0 ceph-mon[75179]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:57 compute-0 groupadd[232048]: new group: name=nova, GID=42436
Feb 01 15:05:57 compute-0 sudo[232045]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:57 compute-0 sudo[232203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aisrqgmyjznsmxhybljvxhuwdhplkhcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958357.4186816-961-17919038094901/AnsiballZ_user.py'
Feb 01 15:05:57 compute-0 sudo[232203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:05:58 compute-0 python3.9[232205]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 01 15:05:58 compute-0 useradd[232207]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Feb 01 15:05:58 compute-0 useradd[232207]: add 'nova' to group 'libvirt'
Feb 01 15:05:58 compute-0 useradd[232207]: add 'nova' to shadow group 'libvirt'
Feb 01 15:05:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:58 compute-0 sudo[232203]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:58 compute-0 sudo[232238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:05:58 compute-0 sudo[232238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:05:58 compute-0 sudo[232238]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:58 compute-0 sudo[232263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:05:58 compute-0 sudo[232263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:05:59 compute-0 sshd-session[232288]: Accepted publickey for zuul from 192.168.122.30 port 54014 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 15:05:59 compute-0 systemd-logind[786]: New session 50 of user zuul.
Feb 01 15:05:59 compute-0 systemd[1]: Started Session 50 of User zuul.
Feb 01 15:05:59 compute-0 sshd-session[232288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 15:05:59 compute-0 sshd-session[232304]: Received disconnect from 192.168.122.30 port 54014:11: disconnected by user
Feb 01 15:05:59 compute-0 sshd-session[232304]: Disconnected from user zuul 192.168.122.30 port 54014
Feb 01 15:05:59 compute-0 sshd-session[232288]: pam_unix(sshd:session): session closed for user zuul
Feb 01 15:05:59 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Feb 01 15:05:59 compute-0 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Feb 01 15:05:59 compute-0 systemd-logind[786]: Removed session 50.
Feb 01 15:05:59 compute-0 ceph-mon[75179]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:05:59 compute-0 sudo[232263]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:05:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:05:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:05:59 compute-0 sudo[232346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:05:59 compute-0 sudo[232346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:05:59 compute-0 sudo[232346]: pam_unix(sudo:session): session closed for user root
Feb 01 15:05:59 compute-0 sudo[232394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:05:59 compute-0 sudo[232394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.702566058 +0000 UTC m=+0.048896371 container create f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:05:59 compute-0 systemd[1]: Started libpod-conmon-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope.
Feb 01 15:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.76469977 +0000 UTC m=+0.111030123 container init f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.774863275 +0000 UTC m=+0.121193578 container start f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:05:59 compute-0 flamboyant_satoshi[232550]: 167 167
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.683394681 +0000 UTC m=+0.029725084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.778227269 +0000 UTC m=+0.124557582 container attach f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 15:05:59 compute-0 systemd[1]: libpod-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope: Deactivated successfully.
Feb 01 15:05:59 compute-0 conmon[232550]: conmon f7210b7abd3a8ba94736 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope/container/memory.events
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.779786853 +0000 UTC m=+0.126117166 container died f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4692f35500e3ce750378469c0f5d72f700c0ec8ea09e86e301e1520bd5e0dbb3-merged.mount: Deactivated successfully.
Feb 01 15:05:59 compute-0 podman[232483]: 2026-02-01 15:05:59.812389526 +0000 UTC m=+0.158719839 container remove f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_satoshi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 15:05:59 compute-0 systemd[1]: libpod-conmon-f7210b7abd3a8ba94736dcae8e11116195ffbab28cc1354fef2ae10859246745.scope: Deactivated successfully.
Feb 01 15:05:59 compute-0 python3.9[232549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:05:59 compute-0 podman[232574]: 2026-02-01 15:05:59.939806958 +0000 UTC m=+0.039545960 container create c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:05:59 compute-0 systemd[1]: Started libpod-conmon-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope.
Feb 01 15:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:06:00.009064969 +0000 UTC m=+0.108803981 container init c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:06:00.014161132 +0000 UTC m=+0.113900144 container start c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:06:00.017417733 +0000 UTC m=+0.117156745 container attach c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:05:59.926955258 +0000 UTC m=+0.026694280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:06:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:06:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:06:00 compute-0 vigilant_sutherland[232614]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:06:00 compute-0 vigilant_sutherland[232614]: --> All data devices are unavailable
Feb 01 15:06:00 compute-0 python3.9[232718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958359.3657653-986-69621164400684/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:06:00.410939293 +0000 UTC m=+0.510678295 container died c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 15:06:00 compute-0 systemd[1]: libpod-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope: Deactivated successfully.
Feb 01 15:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-85d3a5065935f54f304c480070798a7d7704ff5357d9bc7446e8cbef005daa06-merged.mount: Deactivated successfully.
Feb 01 15:06:00 compute-0 podman[232574]: 2026-02-01 15:06:00.442086576 +0000 UTC m=+0.541825578 container remove c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:06:00 compute-0 systemd[1]: libpod-conmon-c2fdd702453c1ecad038d324bc6e86b25e90a315f126ea8c992f17d909c65323.scope: Deactivated successfully.
Feb 01 15:06:00 compute-0 sudo[232394]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:00 compute-0 sudo[232767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:06:00 compute-0 sudo[232767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:06:00 compute-0 sudo[232767]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:00 compute-0 sudo[232800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:06:00 compute-0 sudo[232800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.791777188 +0000 UTC m=+0.034717924 container create 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:06:00 compute-0 systemd[1]: Started libpod-conmon-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope.
Feb 01 15:06:00 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.854824075 +0000 UTC m=+0.097764831 container init 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.859485686 +0000 UTC m=+0.102426422 container start 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.862249473 +0000 UTC m=+0.105190239 container attach 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:06:00 compute-0 nostalgic_mclean[232972]: 167 167
Feb 01 15:06:00 compute-0 systemd[1]: libpod-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope: Deactivated successfully.
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.863760206 +0000 UTC m=+0.106700942 container died 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.778143726 +0000 UTC m=+0.021084482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-12327ab51599c68be30c6cac0b27df374993e0c4fa0a7a2448b2aadfad33faaa-merged.mount: Deactivated successfully.
Feb 01 15:06:00 compute-0 podman[232956]: 2026-02-01 15:06:00.889314432 +0000 UTC m=+0.132255168 container remove 25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:06:00 compute-0 python3.9[232949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:00 compute-0 systemd[1]: libpod-conmon-25c2b2e7f00c429b69b1e5c8bc9521e201d78ba3d9adaa06dcdd014ce57f1eda.scope: Deactivated successfully.
Feb 01 15:06:00 compute-0 podman[233002]: 2026-02-01 15:06:00.992159035 +0000 UTC m=+0.032101371 container create 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:06:01 compute-0 systemd[1]: Started libpod-conmon-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope.
Feb 01 15:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:01.05478872 +0000 UTC m=+0.094731086 container init 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:01.059610715 +0000 UTC m=+0.099553081 container start 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:01.062950069 +0000 UTC m=+0.102892435 container attach 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:00.97737342 +0000 UTC m=+0.017315776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:06:01 compute-0 python3.9[233093]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:01 compute-0 quizzical_cray[233062]: {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     "0": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "devices": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "/dev/loop3"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             ],
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_name": "ceph_lv0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_size": "21470642176",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "name": "ceph_lv0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "tags": {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_name": "ceph",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.crush_device_class": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.encrypted": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.objectstore": "bluestore",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_id": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.vdo": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.with_tpm": "0"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             },
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "vg_name": "ceph_vg0"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         }
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     ],
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     "1": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "devices": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "/dev/loop4"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             ],
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_name": "ceph_lv1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_size": "21470642176",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "name": "ceph_lv1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "tags": {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_name": "ceph",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.crush_device_class": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.encrypted": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.objectstore": "bluestore",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_id": "1",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.vdo": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.with_tpm": "0"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             },
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "vg_name": "ceph_vg1"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         }
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     ],
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     "2": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "devices": [
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "/dev/loop5"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             ],
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_name": "ceph_lv2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_size": "21470642176",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "name": "ceph_lv2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "tags": {
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.cluster_name": "ceph",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.crush_device_class": "",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.encrypted": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.objectstore": "bluestore",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osd_id": "2",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.vdo": "0",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:                 "ceph.with_tpm": "0"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             },
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "type": "block",
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:             "vg_name": "ceph_vg2"
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:         }
Feb 01 15:06:01 compute-0 quizzical_cray[233062]:     ]
Feb 01 15:06:01 compute-0 quizzical_cray[233062]: }
Feb 01 15:06:01 compute-0 systemd[1]: libpod-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope: Deactivated successfully.
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:01.298053719 +0000 UTC m=+0.337996085 container died 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 15:06:01 compute-0 ceph-mon[75179]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d80461d8fdd1b30e494b0ec8d5c97d680d4fcb0420b6db635ae9759612a41c9b-merged.mount: Deactivated successfully.
Feb 01 15:06:01 compute-0 podman[233002]: 2026-02-01 15:06:01.341153267 +0000 UTC m=+0.381095603 container remove 4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 15:06:01 compute-0 systemd[1]: libpod-conmon-4e75de009bc334a6a180fd90ca7304aedfa91d2fc115cbead2f5f0cc96ff33a3.scope: Deactivated successfully.
Feb 01 15:06:01 compute-0 sudo[232800]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:01 compute-0 sudo[233178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:06:01 compute-0 sudo[233178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:06:01 compute-0 sudo[233178]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:01 compute-0 sudo[233213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:06:01 compute-0 sudo[233213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.681936148 +0000 UTC m=+0.030186758 container create 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 15:06:01 compute-0 python3.9[233307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:01 compute-0 systemd[1]: Started libpod-conmon-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope.
Feb 01 15:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.754968695 +0000 UTC m=+0.103219355 container init 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.760344705 +0000 UTC m=+0.108595315 container start 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.763520144 +0000 UTC m=+0.111770774 container attach 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:06:01 compute-0 frosty_blackwell[233333]: 167 167
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.668847141 +0000 UTC m=+0.017097771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:06:01 compute-0 systemd[1]: libpod-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope: Deactivated successfully.
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.767783714 +0000 UTC m=+0.116034364 container died 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 15:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc94fdfde3c7a3fc467a6141ccf10a75793a3b02df435cbccc47b25bbf30da1e-merged.mount: Deactivated successfully.
Feb 01 15:06:01 compute-0 podman[233319]: 2026-02-01 15:06:01.813155176 +0000 UTC m=+0.161405786 container remove 5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:06:01 compute-0 systemd[1]: libpod-conmon-5f2029bc59d931191183cc6100d3030ea14151c7268952cb5db6b8c75ff228c3.scope: Deactivated successfully.
Feb 01 15:06:01 compute-0 podman[233427]: 2026-02-01 15:06:01.966767801 +0000 UTC m=+0.061191386 container create 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 15:06:02 compute-0 systemd[1]: Started libpod-conmon-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope.
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:01.942995645 +0000 UTC m=+0.037419330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:06:02 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:02.059712346 +0000 UTC m=+0.154135971 container init 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:02.069540312 +0000 UTC m=+0.163963937 container start 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:02.072877565 +0000 UTC m=+0.167301180 container attach 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:06:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:02 compute-0 python3.9[233498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958361.3530967-986-6520375956519/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:02 compute-0 lvm[233722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:06:02 compute-0 lvm[233722]: VG ceph_vg0 finished
Feb 01 15:06:02 compute-0 lvm[233725]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:06:02 compute-0 lvm[233725]: VG ceph_vg1 finished
Feb 01 15:06:02 compute-0 lvm[233727]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:06:02 compute-0 lvm[233727]: VG ceph_vg2 finished
Feb 01 15:06:02 compute-0 python3.9[233706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:02 compute-0 determined_kepler[233485]: {}
Feb 01 15:06:02 compute-0 systemd[1]: libpod-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Deactivated successfully.
Feb 01 15:06:02 compute-0 systemd[1]: libpod-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Consumed 1.201s CPU time.
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:02.867067156 +0000 UTC m=+0.961490751 container died 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54505dd67e240dfd07ce638dd4957e4c7aceb04b3840713fd149efcd60ac626-merged.mount: Deactivated successfully.
Feb 01 15:06:02 compute-0 podman[233427]: 2026-02-01 15:06:02.905205495 +0000 UTC m=+0.999629070 container remove 162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kepler, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 15:06:02 compute-0 systemd[1]: libpod-conmon-162964e8a83b86c334a3b9e50b1b70900dca6d67e614606f58e413d6a33516b1.scope: Deactivated successfully.
Feb 01 15:06:02 compute-0 sudo[233213]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:06:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:06:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:06:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:06:02 compute-0 sudo[233814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:06:02 compute-0 sudo[233814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:06:02 compute-0 sudo[233814]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:03 compute-0 python3.9[233889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958362.3567755-986-19488493019293/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:03 compute-0 ceph-mon[75179]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:06:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:06:03 compute-0 python3.9[234039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:04 compute-0 python3.9[234160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958363.3419368-986-115254137798906/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:04 compute-0 python3.9[234310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:05 compute-0 ceph-mon[75179]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:05 compute-0 python3.9[234431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958364.520509-986-242614210123598/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:05 compute-0 sudo[234581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjscatnseadxwwvnxaggolbvuczakdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958365.5912852-1069-188738295721413/AnsiballZ_file.py'
Feb 01 15:06:05 compute-0 sudo[234581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:05 compute-0 python3.9[234583]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:06:06 compute-0 sudo[234581]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:06 compute-0 sudo[234733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrfaqjtloobaumgqhopgjulwoaaivcdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958366.177611-1077-115050716505555/AnsiballZ_copy.py'
Feb 01 15:06:06 compute-0 sudo[234733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:06 compute-0 python3.9[234735]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:06:06 compute-0 sudo[234733]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:07 compute-0 sudo[234885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkhfqlbuumotowfcvxwavbiihhskekrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958366.7839603-1085-281280526395576/AnsiballZ_stat.py'
Feb 01 15:06:07 compute-0 sudo[234885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:07 compute-0 python3.9[234887]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:07 compute-0 sudo[234885]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:07 compute-0 ceph-mon[75179]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:07 compute-0 sudo[235037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtomljxpfpdqpsccwphhcxydtuontdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958367.4356313-1093-220117525042189/AnsiballZ_stat.py'
Feb 01 15:06:07 compute-0 sudo[235037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.799 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:06:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.800 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:06:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:06:07.801 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:06:07 compute-0 python3.9[235039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:07 compute-0 sudo[235037]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:08 compute-0 sudo[235160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdfjdyxhivgvyenyuworsjxfojsgcayf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958367.4356313-1093-220117525042189/AnsiballZ_copy.py'
Feb 01 15:06:08 compute-0 sudo[235160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:08 compute-0 python3.9[235162]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769958367.4356313-1093-220117525042189/.source _original_basename=.t4spuymh follow=False checksum=390336b6fd37bd6abc6a51be59667203fda4a8f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 01 15:06:08 compute-0 sudo[235160]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:09 compute-0 python3.9[235314]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:09 compute-0 ceph-mon[75179]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:09 compute-0 python3.9[235466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:10 compute-0 python3.9[235587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958369.4889321-1119-267632565637640/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:11 compute-0 python3.9[235737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 01 15:06:11 compute-0 ceph-mon[75179]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:11 compute-0 podman[235833]: 2026-02-01 15:06:11.397264211 +0000 UTC m=+0.064620362 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 01 15:06:11 compute-0 podman[235832]: 2026-02-01 15:06:11.40793309 +0000 UTC m=+0.075289361 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:06:11 compute-0 python3.9[235879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769958370.6156359-1134-54541667157450/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 01 15:06:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:12 compute-0 sudo[236052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqsiotvmmoocpclvlilehrbyywdqeyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958371.8860362-1151-182007720593123/AnsiballZ_container_config_data.py'
Feb 01 15:06:12 compute-0 sudo[236052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:12 compute-0 python3.9[236054]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb 01 15:06:12 compute-0 sudo[236052]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:13 compute-0 ceph-mon[75179]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:13 compute-0 sudo[236204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqeqwgqfispnlvzmibvlszbmqlotfdhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958372.8865535-1162-60469119855982/AnsiballZ_container_config_hash.py'
Feb 01 15:06:13 compute-0 sudo[236204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:13 compute-0 python3.9[236206]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 01 15:06:13 compute-0 sudo[236204]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:14 compute-0 sudo[236356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szxwyiimhapcaalsuhtbkwpjbncovnxt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769958373.884797-1172-154861958837470/AnsiballZ_edpm_container_manage.py'
Feb 01 15:06:14 compute-0 sudo[236356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:14 compute-0 python3[236358]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 01 15:06:15 compute-0 ceph-mon[75179]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:16 compute-0 ceph-mon[75179]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:06:17
Feb 01 15:06:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:06:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:06:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes']
Feb 01 15:06:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:20 compute-0 ceph-mon[75179]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:21 compute-0 ceph-mon[75179]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:23 compute-0 ceph-mon[75179]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:24 compute-0 podman[236371]: 2026-02-01 15:06:24.232734457 +0000 UTC m=+9.617868961 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 01 15:06:24 compute-0 podman[236459]: 2026-02-01 15:06:24.379528751 +0000 UTC m=+0.050938459 container create 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 01 15:06:24 compute-0 podman[236459]: 2026-02-01 15:06:24.351950648 +0000 UTC m=+0.023360356 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 01 15:06:24 compute-0 python3[236358]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 01 15:06:24 compute-0 sudo[236356]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:24 compute-0 ceph-mon[75179]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:24 compute-0 sudo[236647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-makqkwvadjroscjtojsnkczxwlifeqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958384.685977-1180-165807998524667/AnsiballZ_stat.py'
Feb 01 15:06:24 compute-0 sudo[236647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:25 compute-0 python3.9[236649]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:25 compute-0 sudo[236647]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:25 compute-0 sudo[236801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pigkbiqzcmzixueohbnimgkqzlfydlrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958385.579757-1192-209238645478749/AnsiballZ_container_config_data.py'
Feb 01 15:06:25 compute-0 sudo[236801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:25 compute-0 python3.9[236803]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb 01 15:06:26 compute-0 sudo[236801]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:26 compute-0 sudo[236953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shatzjlgxitdabemqmibntfukwfzitvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958386.3245974-1203-109663680556289/AnsiballZ_container_config_hash.py'
Feb 01 15:06:26 compute-0 sudo[236953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:26 compute-0 python3.9[236955]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 01 15:06:26 compute-0 sudo[236953]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:27 compute-0 ceph-mon[75179]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:27 compute-0 sudo[237105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcxdczxjapfrjtczaiixjxfdxgvddfs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769958387.3756497-1213-136393553674982/AnsiballZ_edpm_container_manage.py'
Feb 01 15:06:27 compute-0 sudo[237105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:27 compute-0 python3[237107]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb 01 15:06:27 compute-0 podman[237145]: 2026-02-01 15:06:27.980152104 +0000 UTC m=+0.057547794 container create ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=edpm)
Feb 01 15:06:27 compute-0 podman[237145]: 2026-02-01 15:06:27.949652709 +0000 UTC m=+0.027048489 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 01 15:06:27 compute-0 python3[237107]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 01 15:06:28 compute-0 sudo[237105]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:06:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:06:28 compute-0 sudo[237332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybkeqkyzbgbnnegruoaaosfbdhfgcuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958388.2070396-1221-49755127110481/AnsiballZ_stat.py'
Feb 01 15:06:28 compute-0 sudo[237332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:28 compute-0 python3.9[237334]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:28 compute-0 sudo[237332]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:28 compute-0 sudo[237486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkupzllhaoobblgdixnmjlxgvxmzoolt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958388.7692027-1230-1534916913063/AnsiballZ_file.py'
Feb 01 15:06:28 compute-0 sudo[237486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:29 compute-0 python3.9[237488]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:06:29 compute-0 sudo[237486]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:29 compute-0 ceph-mon[75179]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:29 compute-0 sudo[237637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwjjffhhmonvicovmzpkfejroaidjqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958389.2328014-1230-12339257402297/AnsiballZ_copy.py'
Feb 01 15:06:29 compute-0 sudo[237637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:29 compute-0 python3.9[237639]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769958389.2328014-1230-12339257402297/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 01 15:06:29 compute-0 sudo[237637]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:30 compute-0 sudo[237713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbcfbglbcfilltreeqeqlxcvfrfoeff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958389.2328014-1230-12339257402297/AnsiballZ_systemd.py'
Feb 01 15:06:30 compute-0 sudo[237713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:30 compute-0 python3.9[237715]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 01 15:06:30 compute-0 systemd[1]: Reloading.
Feb 01 15:06:30 compute-0 systemd-rc-local-generator[237740]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:06:30 compute-0 systemd-sysv-generator[237743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:06:30 compute-0 sudo[237713]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:31 compute-0 sudo[237825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpjhhhfcnlzipyvidtpgqfbiwdsrxbyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958389.2328014-1230-12339257402297/AnsiballZ_systemd.py'
Feb 01 15:06:31 compute-0 sudo[237825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:31 compute-0 ceph-mon[75179]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:31 compute-0 python3.9[237827]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 01 15:06:31 compute-0 systemd[1]: Reloading.
Feb 01 15:06:31 compute-0 systemd-sysv-generator[237857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 01 15:06:31 compute-0 systemd-rc-local-generator[237853]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 01 15:06:31 compute-0 systemd[1]: Starting nova_compute container...
Feb 01 15:06:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:31 compute-0 podman[237867]: 2026-02-01 15:06:31.752103779 +0000 UTC m=+0.091397383 container init ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:06:31 compute-0 podman[237867]: 2026-02-01 15:06:31.760540405 +0000 UTC m=+0.099833999 container start ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Feb 01 15:06:31 compute-0 podman[237867]: nova_compute
Feb 01 15:06:31 compute-0 systemd[1]: Started nova_compute container.
Feb 01 15:06:31 compute-0 nova_compute[237882]: + sudo -E kolla_set_configs
Feb 01 15:06:31 compute-0 sudo[237825]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Validating config file
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying service configuration files
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Deleting /etc/ceph
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Creating directory /etc/ceph
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Writing out command to execute
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:31 compute-0 nova_compute[237882]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 01 15:06:31 compute-0 nova_compute[237882]: ++ cat /run_command
Feb 01 15:06:31 compute-0 nova_compute[237882]: + CMD=nova-compute
Feb 01 15:06:31 compute-0 nova_compute[237882]: + ARGS=
Feb 01 15:06:31 compute-0 nova_compute[237882]: + sudo kolla_copy_cacerts
Feb 01 15:06:31 compute-0 nova_compute[237882]: + [[ ! -n '' ]]
Feb 01 15:06:31 compute-0 nova_compute[237882]: + . kolla_extend_start
Feb 01 15:06:31 compute-0 nova_compute[237882]: Running command: 'nova-compute'
Feb 01 15:06:31 compute-0 nova_compute[237882]: + echo 'Running command: '\''nova-compute'\'''
Feb 01 15:06:31 compute-0 nova_compute[237882]: + umask 0022
Feb 01 15:06:31 compute-0 nova_compute[237882]: + exec nova-compute
Feb 01 15:06:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:32 compute-0 python3.9[238043]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:33 compute-0 ceph-mon[75179]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:33 compute-0 python3.9[238194]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:33 compute-0 python3.9[238344]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 01 15:06:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.143 237886 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.144 237886 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.283 237886 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.301 237886 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.301 237886 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 01 15:06:34 compute-0 sudo[238498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkmhizkltupdgvbqqewchnexjskmodh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958393.9864247-1290-220332794651045/AnsiballZ_podman_container.py'
Feb 01 15:06:34 compute-0 sudo[238498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:34 compute-0 python3.9[238500]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 01 15:06:34 compute-0 sudo[238498]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:34 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:06:34 compute-0 nova_compute[237882]: 2026-02-01 15:06:34.875 237886 INFO nova.virt.driver [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.083 237886 INFO nova.compute.provider_config [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 01 15:06:35 compute-0 ceph-mon[75179]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.210 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.211 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.211 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.212 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.212 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.213 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.214 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.215 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.215 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.216 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.216 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.217 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.218 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.219 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.220 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.221 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.222 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.223 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.224 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.224 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.225 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.226 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.227 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.227 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.228 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.228 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.229 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.229 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.230 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.231 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.232 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.233 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.234 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.235 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.236 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.237 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.237 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.238 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.239 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.240 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.241 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.241 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.242 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.243 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.244 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.244 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.245 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.246 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.247 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.248 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.249 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.250 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.251 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.252 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.253 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.254 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.255 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.256 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.257 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.258 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.259 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.260 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.261 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.262 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.263 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.264 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.265 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.266 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.267 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.268 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.269 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.270 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.271 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.272 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.273 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.274 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.275 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.276 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.277 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.278 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.279 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.280 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.281 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.282 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.283 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.284 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.285 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.286 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.287 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.288 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.289 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.290 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.291 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.292 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.293 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.294 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.295 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.296 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.297 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.298 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.299 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.300 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.301 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.302 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.303 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.304 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.305 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.306 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.307 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.308 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.309 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.310 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.311 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.312 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.313 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.314 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.315 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.316 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.317 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.318 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.319 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.320 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.321 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 WARNING oslo_config.cfg [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 01 15:06:35 compute-0 nova_compute[237882]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 01 15:06:35 compute-0 nova_compute[237882]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 01 15:06:35 compute-0 nova_compute[237882]: and ``live_migration_inbound_addr`` respectively.
Feb 01 15:06:35 compute-0 nova_compute[237882]: ).  Its value may be silently ignored in the future.
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.322 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.323 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.324 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_secret_uuid        = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.325 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.326 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.327 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.328 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.329 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.330 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.331 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.332 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.333 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.334 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.335 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 sudo[238672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfczdfsoibwaixfctnxlojhpoeghszd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958395.0327885-1298-95319789009688/AnsiballZ_systemd.py'
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.336 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.337 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.338 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.339 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.340 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 sudo[238672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.341 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.342 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.343 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.344 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.345 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.346 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.347 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.348 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.349 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.350 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.351 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.352 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.353 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.354 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.355 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.356 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.357 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.358 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.359 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.360 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.361 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.362 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.363 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.364 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.365 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.366 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.367 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.368 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.369 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.370 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.371 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.372 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.373 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.374 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.375 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.376 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.377 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.378 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.379 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.380 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.381 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.382 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.383 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.384 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.385 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.386 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.387 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.388 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.389 237886 DEBUG oslo_service.service [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.390 237886 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.406 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.407 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 01 15:06:35 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Feb 01 15:06:35 compute-0 systemd[1]: Started libvirt QEMU daemon.
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.485 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4761a2b3d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.489 237886 DEBUG nova.virt.libvirt.host [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4761a2b3d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.490 237886 INFO nova.virt.libvirt.driver [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Connection event '1' reason 'None'
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.518 237886 WARNING nova.virt.libvirt.driver [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.519 237886 DEBUG nova.virt.libvirt.volume.mount [None req-790b6d31-ca37-4297-9043-daaf51cbbefd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 01 15:06:35 compute-0 python3.9[238674]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 01 15:06:35 compute-0 systemd[1]: Stopping nova_compute container...
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.712 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.712 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 01 15:06:35 compute-0 nova_compute[237882]: 2026-02-01 15:06:35.713 237886 DEBUG oslo_concurrency.lockutils [None req-1a67a202-e6b3-4a69-975e-35be8ed645c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 01 15:06:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:36 compute-0 virtqemud[238696]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb 01 15:06:36 compute-0 virtqemud[238696]: hostname: compute-0
Feb 01 15:06:36 compute-0 virtqemud[238696]: End of file while reading data: Input/output error
Feb 01 15:06:36 compute-0 systemd[1]: libpod-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5.scope: Deactivated successfully.
Feb 01 15:06:36 compute-0 systemd[1]: libpod-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5.scope: Consumed 2.854s CPU time.
Feb 01 15:06:36 compute-0 podman[238730]: 2026-02-01 15:06:36.703531003 +0000 UTC m=+1.026791681 container died ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 01 15:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5-userdata-shm.mount: Deactivated successfully.
Feb 01 15:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f-merged.mount: Deactivated successfully.
Feb 01 15:06:37 compute-0 ceph-mon[75179]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:37 compute-0 podman[238730]: 2026-02-01 15:06:37.766223619 +0000 UTC m=+2.089484277 container cleanup ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:06:37 compute-0 podman[238730]: nova_compute
Feb 01 15:06:37 compute-0 podman[238765]: nova_compute
Feb 01 15:06:37 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 01 15:06:37 compute-0 systemd[1]: Stopped nova_compute container.
Feb 01 15:06:37 compute-0 systemd[1]: Starting nova_compute container...
Feb 01 15:06:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10996b9645f27c5a3c4b71e9fc72813ca4b83b42ca11ead8fac15e1535eedb1f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:37 compute-0 podman[238778]: 2026-02-01 15:06:37.986893894 +0000 UTC m=+0.112342270 container init ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Feb 01 15:06:37 compute-0 podman[238778]: 2026-02-01 15:06:37.9949756 +0000 UTC m=+0.120423936 container start ddc42f08a16e43f37746c8a5e9a6d4ed6ded413b7350fc98de0b0af570620ca5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:06:37 compute-0 podman[238778]: nova_compute
Feb 01 15:06:38 compute-0 systemd[1]: Started nova_compute container.
Feb 01 15:06:38 compute-0 nova_compute[238794]: + sudo -E kolla_set_configs
Feb 01 15:06:38 compute-0 sudo[238672]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Validating config file
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying service configuration files
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /etc/ceph
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Creating directory /etc/ceph
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Writing out command to execute
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:38 compute-0 nova_compute[238794]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 01 15:06:38 compute-0 nova_compute[238794]: ++ cat /run_command
Feb 01 15:06:38 compute-0 nova_compute[238794]: + CMD=nova-compute
Feb 01 15:06:38 compute-0 nova_compute[238794]: + ARGS=
Feb 01 15:06:38 compute-0 nova_compute[238794]: + sudo kolla_copy_cacerts
Feb 01 15:06:38 compute-0 nova_compute[238794]: + [[ ! -n '' ]]
Feb 01 15:06:38 compute-0 nova_compute[238794]: + . kolla_extend_start
Feb 01 15:06:38 compute-0 nova_compute[238794]: Running command: 'nova-compute'
Feb 01 15:06:38 compute-0 nova_compute[238794]: + echo 'Running command: '\''nova-compute'\'''
Feb 01 15:06:38 compute-0 nova_compute[238794]: + umask 0022
Feb 01 15:06:38 compute-0 nova_compute[238794]: + exec nova-compute
Feb 01 15:06:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:38 compute-0 sudo[238955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewyfxusdtgmsmdnxnmztoaxqgdslizdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769958398.2693214-1307-118540712048721/AnsiballZ_podman_container.py'
Feb 01 15:06:38 compute-0 sudo[238955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:06:38 compute-0 ceph-mon[75179]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:38 compute-0 python3.9[238957]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 01 15:06:38 compute-0 systemd[1]: Started libpod-conmon-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope.
Feb 01 15:06:38 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 01 15:06:39 compute-0 podman[238982]: 2026-02-01 15:06:39.010760892 +0000 UTC m=+0.145886060 container init 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:06:39 compute-0 podman[238982]: 2026-02-01 15:06:39.020352901 +0000 UTC m=+0.155478039 container start 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:06:39 compute-0 python3.9[238957]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Applying nova statedir ownership
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 01 15:06:39 compute-0 nova_compute_init[239004]: INFO:nova_statedir:Nova statedir ownership complete
Feb 01 15:06:39 compute-0 systemd[1]: libpod-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope: Deactivated successfully.
Feb 01 15:06:39 compute-0 podman[239005]: 2026-02-01 15:06:39.086474944 +0000 UTC m=+0.038897161 container died 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm)
Feb 01 15:06:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac-userdata-shm.mount: Deactivated successfully.
Feb 01 15:06:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee8b5b79cd15fa1ebd9285807549a17e5193bbb347b38d8b3e15df23e9d4932-merged.mount: Deactivated successfully.
Feb 01 15:06:39 compute-0 podman[239015]: 2026-02-01 15:06:39.13805056 +0000 UTC m=+0.050248489 container cleanup 4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:06:39 compute-0 systemd[1]: libpod-conmon-4188a32f613d62bd9297f1397158740f0f9a12b3a8be9b7582730b6369e140ac.scope: Deactivated successfully.
Feb 01 15:06:39 compute-0 sudo[238955]: pam_unix(sudo:session): session closed for user root
Feb 01 15:06:39 compute-0 sshd-session[214474]: Connection closed by 192.168.122.30 port 60612
Feb 01 15:06:39 compute-0 sshd-session[214471]: pam_unix(sshd:session): session closed for user zuul
Feb 01 15:06:39 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Feb 01 15:06:39 compute-0 systemd[1]: session-49.scope: Consumed 1min 43.046s CPU time.
Feb 01 15:06:39 compute-0 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Feb 01 15:06:39 compute-0 systemd-logind[786]: Removed session 49.
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.085 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.086 238798 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 01 15:06:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.284 238798 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.310 238798 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.311 238798 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.816 238798 INFO nova.virt.driver [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.954 238798 INFO nova.compute.provider_config [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.978 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.978 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.979 238798 DEBUG oslo_concurrency.lockutils [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.979 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.980 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.981 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.982 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.983 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.984 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.985 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.986 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.987 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.988 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.989 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.990 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.991 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.992 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.993 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.994 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.995 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.996 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.997 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.998 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:40 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:40.999 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.000 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.001 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.002 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.003 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.004 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.005 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.006 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.007 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.008 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.009 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.010 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.011 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.012 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.013 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.014 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.015 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.016 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.017 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.018 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.019 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.020 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.021 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.022 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.023 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.024 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.025 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.026 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.027 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.028 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.029 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.030 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.031 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.032 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.033 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.034 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.035 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.036 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.037 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.038 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.039 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.040 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.041 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.042 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.043 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.044 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.045 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.046 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.047 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.048 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.049 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.050 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.051 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.052 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.053 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.054 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.055 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 WARNING oslo_config.cfg [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 01 15:06:41 compute-0 nova_compute[238794]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 01 15:06:41 compute-0 nova_compute[238794]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 01 15:06:41 compute-0 nova_compute[238794]: and ``live_migration_inbound_addr`` respectively.
Feb 01 15:06:41 compute-0 nova_compute[238794]: ).  Its value may be silently ignored in the future.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.056 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.057 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.058 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_secret_uuid        = 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.059 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.060 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.061 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.062 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.063 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.064 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.065 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.066 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.067 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.068 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.069 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.070 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.071 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.072 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.073 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.074 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.075 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.076 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.077 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.078 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.079 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.080 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.081 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.082 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.083 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.084 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.085 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.086 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.087 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.088 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.089 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.090 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.091 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.092 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.093 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.094 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.095 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.096 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.097 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.098 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.099 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.100 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.101 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.102 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.103 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.104 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.105 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.106 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.107 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.108 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.109 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.110 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.111 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.112 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.113 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.114 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.115 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.116 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.117 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.118 238798 DEBUG oslo_service.service [None req-e917d05f-8f26-4fc8-a326-322499d82b00 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.119 238798 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.145 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.146 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.161 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f755fa4d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.164 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f755fa4d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 01 15:06:41 compute-0 ceph-mon[75179]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.164 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Connection event '1' reason 'None'
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.170 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host capabilities <capabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]: 
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <host>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <uuid>072bb88e-d455-426c-a850-83903b041dc8</uuid>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <arch>x86_64</arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model>EPYC-Rome-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <vendor>AMD</vendor>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <microcode version='16777317'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <signature family='23' model='49' stepping='0'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='x2apic'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='tsc-deadline'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='osxsave'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='hypervisor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='tsc_adjust'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='spec-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='stibp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='arch-capabilities'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='cmp_legacy'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='topoext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='virt-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='lbrv'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='tsc-scale'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='vmcb-clean'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='pause-filter'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='pfthreshold'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='svme-addr-chk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='rdctl-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='skip-l1dfl-vmentry'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='mds-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature name='pschange-mc-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <pages unit='KiB' size='4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <pages unit='KiB' size='2048'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <pages unit='KiB' size='1048576'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <power_management>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <suspend_mem/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </power_management>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <iommu support='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <migration_features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <live/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <uri_transports>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <uri_transport>tcp</uri_transport>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <uri_transport>rdma</uri_transport>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </uri_transports>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </migration_features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <topology>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <cells num='1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <cell id='0'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <memory unit='KiB'>7864300</memory>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <pages unit='KiB' size='4'>1966075</pages>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <pages unit='KiB' size='2048'>0</pages>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <distances>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <sibling id='0' value='10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           </distances>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           <cpus num='8'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:           </cpus>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         </cell>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </cells>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </topology>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <cache>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </cache>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <secmodel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model>selinux</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <doi>0</doi>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </secmodel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <secmodel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model>dac</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <doi>0</doi>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </secmodel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </host>
Feb 01 15:06:41 compute-0 nova_compute[238794]: 
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <guest>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <os_type>hvm</os_type>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <arch name='i686'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <wordsize>32</wordsize>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <domain type='qemu'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <domain type='kvm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <pae/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <nonpae/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <acpi default='on' toggle='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <apic default='on' toggle='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <cpuselection/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <deviceboot/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <disksnapshot default='on' toggle='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <externalSnapshot/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </guest>
Feb 01 15:06:41 compute-0 nova_compute[238794]: 
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <guest>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <os_type>hvm</os_type>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <arch name='x86_64'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <wordsize>64</wordsize>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <domain type='qemu'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <domain type='kvm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <acpi default='on' toggle='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <apic default='on' toggle='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <cpuselection/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <deviceboot/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <disksnapshot default='on' toggle='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <externalSnapshot/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </guest>
Feb 01 15:06:41 compute-0 nova_compute[238794]: 
Feb 01 15:06:41 compute-0 nova_compute[238794]: </capabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]: 
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.176 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.181 238798 WARNING nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.181 238798 DEBUG nova.virt.libvirt.volume.mount [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.204 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 01 15:06:41 compute-0 nova_compute[238794]: <domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <path>/usr/libexec/qemu-kvm</path>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <domain>kvm</domain>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <arch>i686</arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <vcpu max='240'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <iothreads supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <os supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='firmware'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <loader supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>rom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pflash</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='readonly'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>yes</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='secure'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </loader>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </os>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-passthrough' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='hostPassthroughMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='maximum' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='maximumMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-model' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <vendor>AMD</vendor>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='x2apic'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-deadline'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='hypervisor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc_adjust'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='spec-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='stibp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='cmp_legacy'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='overflow-recov'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='succor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='amd-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='virt-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lbrv'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-scale'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='vmcb-clean'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='flushbyasid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pause-filter'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pfthreshold'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='svme-addr-chk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='disable' name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='custom' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Dhyana-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v6'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v7'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <memoryBacking supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='sourceType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>anonymous</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>memfd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </memoryBacking>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <disk supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='diskDevice'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>disk</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cdrom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>floppy</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>lun</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ide</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>fdc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>sata</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </disk>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <graphics supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vnc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egl-headless</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </graphics>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <video supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='modelType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vga</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cirrus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>none</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>bochs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ramfb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </video>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hostdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='mode'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>subsystem</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='startupPolicy'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>mandatory</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>requisite</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>optional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='subsysType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pci</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='capsType'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='pciBackend'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hostdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <rng supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>random</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </rng>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <filesystem supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='driverType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>path</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>handle</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtiofs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </filesystem>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tpm supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-tis</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-crb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emulator</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>external</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendVersion'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>2.0</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </tpm>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <redirdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </redirdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <channel supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </channel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <crypto supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </crypto>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <interface supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>passt</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </interface>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <panic supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>isa</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>hyperv</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </panic>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <console supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>null</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dev</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pipe</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stdio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>udp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tcp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu-vdagent</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </console>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <gic supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <vmcoreinfo supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <genid supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backingStoreInput supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backup supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <async-teardown supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <s390-pv supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <ps2 supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tdx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sev supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sgx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hyperv supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='features'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>relaxed</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vapic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>spinlocks</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vpindex</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>runtime</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>synic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stimer</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reset</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vendor_id</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>frequencies</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reenlightenment</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tlbflush</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ipi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>avic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emsr_bitmap</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>xmm_input</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <spinlocks>4095</spinlocks>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <stimer_direct>on</stimer_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_direct>on</tlbflush_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_extended>on</tlbflush_extended>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hyperv>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <launchSecurity supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]: </domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.211 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 01 15:06:41 compute-0 nova_compute[238794]: <domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <path>/usr/libexec/qemu-kvm</path>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <domain>kvm</domain>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <arch>i686</arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <vcpu max='4096'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <iothreads supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <os supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='firmware'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <loader supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>rom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pflash</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='readonly'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>yes</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='secure'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </loader>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </os>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-passthrough' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='hostPassthroughMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='maximum' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='maximumMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-model' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <vendor>AMD</vendor>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='x2apic'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-deadline'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='hypervisor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc_adjust'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='spec-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='stibp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='cmp_legacy'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='overflow-recov'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='succor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='amd-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='virt-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lbrv'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-scale'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='vmcb-clean'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='flushbyasid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pause-filter'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pfthreshold'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='svme-addr-chk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='disable' name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='custom' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Dhyana-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v6'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v7'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <memoryBacking supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='sourceType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>anonymous</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>memfd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </memoryBacking>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <disk supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='diskDevice'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>disk</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cdrom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>floppy</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>lun</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>fdc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>sata</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </disk>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <graphics supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vnc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egl-headless</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </graphics>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <video supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='modelType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vga</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cirrus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>none</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>bochs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ramfb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </video>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hostdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='mode'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>subsystem</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='startupPolicy'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>mandatory</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>requisite</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>optional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='subsysType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pci</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='capsType'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='pciBackend'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hostdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <rng supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>random</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </rng>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <filesystem supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='driverType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>path</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>handle</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtiofs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </filesystem>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tpm supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-tis</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-crb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emulator</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>external</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendVersion'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>2.0</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </tpm>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <redirdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </redirdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <channel supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </channel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <crypto supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </crypto>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <interface supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>passt</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </interface>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <panic supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>isa</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>hyperv</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </panic>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <console supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>null</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dev</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pipe</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stdio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>udp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tcp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu-vdagent</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </console>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <gic supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <vmcoreinfo supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <genid supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backingStoreInput supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backup supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <async-teardown supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <s390-pv supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <ps2 supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tdx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sev supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sgx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hyperv supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='features'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>relaxed</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vapic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>spinlocks</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vpindex</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>runtime</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>synic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stimer</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reset</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vendor_id</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>frequencies</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reenlightenment</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tlbflush</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ipi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>avic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emsr_bitmap</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>xmm_input</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <spinlocks>4095</spinlocks>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <stimer_direct>on</stimer_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_direct>on</tlbflush_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_extended>on</tlbflush_extended>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hyperv>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <launchSecurity supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]: </domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.265 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.271 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 01 15:06:41 compute-0 nova_compute[238794]: <domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <path>/usr/libexec/qemu-kvm</path>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <domain>kvm</domain>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <arch>x86_64</arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <vcpu max='240'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <iothreads supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <os supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='firmware'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <loader supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>rom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pflash</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='readonly'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>yes</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='secure'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </loader>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </os>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-passthrough' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='hostPassthroughMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='maximum' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='maximumMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-model' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <vendor>AMD</vendor>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='x2apic'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-deadline'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='hypervisor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc_adjust'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='spec-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='stibp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='cmp_legacy'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='overflow-recov'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='succor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='amd-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='virt-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lbrv'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-scale'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='vmcb-clean'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='flushbyasid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pause-filter'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pfthreshold'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='svme-addr-chk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='disable' name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='custom' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Dhyana-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v6'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v7'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <memoryBacking supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='sourceType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>anonymous</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>memfd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </memoryBacking>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <disk supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='diskDevice'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>disk</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cdrom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>floppy</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>lun</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ide</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>fdc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>sata</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </disk>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <graphics supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vnc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egl-headless</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </graphics>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <video supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='modelType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vga</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cirrus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>none</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>bochs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ramfb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </video>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hostdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='mode'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>subsystem</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='startupPolicy'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>mandatory</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>requisite</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>optional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='subsysType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pci</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='capsType'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='pciBackend'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hostdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <rng supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>random</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </rng>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <filesystem supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='driverType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>path</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>handle</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtiofs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </filesystem>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tpm supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-tis</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-crb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emulator</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>external</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendVersion'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>2.0</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </tpm>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <redirdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </redirdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <channel supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </channel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <crypto supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </crypto>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <interface supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>passt</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </interface>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <panic supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>isa</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>hyperv</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </panic>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <console supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>null</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dev</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pipe</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stdio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>udp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tcp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu-vdagent</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </console>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <gic supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <vmcoreinfo supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <genid supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backingStoreInput supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backup supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <async-teardown supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <s390-pv supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <ps2 supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tdx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sev supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sgx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hyperv supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='features'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>relaxed</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vapic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>spinlocks</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vpindex</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>runtime</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>synic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stimer</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reset</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vendor_id</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>frequencies</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reenlightenment</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tlbflush</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ipi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>avic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emsr_bitmap</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>xmm_input</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <spinlocks>4095</spinlocks>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <stimer_direct>on</stimer_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_direct>on</tlbflush_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_extended>on</tlbflush_extended>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hyperv>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <launchSecurity supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]: </domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.348 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 01 15:06:41 compute-0 nova_compute[238794]: <domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <path>/usr/libexec/qemu-kvm</path>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <domain>kvm</domain>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <arch>x86_64</arch>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <vcpu max='4096'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <iothreads supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <os supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='firmware'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>efi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <loader supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>rom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pflash</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='readonly'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>yes</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='secure'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>yes</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>no</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </loader>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </os>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-passthrough' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='hostPassthroughMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='maximum' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='maximumMigratable'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>on</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>off</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='host-model' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <vendor>AMD</vendor>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='x2apic'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-deadline'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='hypervisor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc_adjust'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='spec-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='stibp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='cmp_legacy'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='overflow-recov'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='succor'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='amd-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='virt-ssbd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lbrv'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='tsc-scale'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='vmcb-clean'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='flushbyasid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pause-filter'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='pfthreshold'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='svme-addr-chk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <feature policy='disable' name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <mode name='custom' supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Broadwell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cascadelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='ClearwaterForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ddpd-u'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sha512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm3'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sm4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Cooperlake-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Denverton-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Dhyana-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Genoa-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Milan-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Rome-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-Turin-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amd-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='auto-ibrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vp2intersect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fs-gs-base-ns'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibpb-brtype'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='no-nested-data-bp'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='null-sel-clr-base'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='perfmon-v2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbpb'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='srso-user-kernel-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='stibp-always-on'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='EPYC-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='GraniteRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-128'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-256'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx10-512'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='prefetchiti'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Haswell-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-noTSX'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v6'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Icelake-Server-v7'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='IvyBridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='KnightsMill-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4fmaps'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-4vnniw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512er'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512pf'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G4-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Opteron_G5-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fma4'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tbm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xop'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SapphireRapids-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='amx-tile'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-bf16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-fp16'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512-vpopcntdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bitalg'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vbmi2'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrc'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fzrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='la57'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='taa-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='tsx-ldtrk'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='SierraForest-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ifma'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-ne-convert'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx-vnni-int8'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bhi-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='bus-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cmpccxadd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fbsdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='fsrs'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ibrs-all'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='intel-psfd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ipred-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='lam'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mcdt-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pbrsb-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='psdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rrsba-ctrl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='sbdr-ssdp-no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='serialize'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vaes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='vpclmulqdq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Client-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='hle'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='rtm'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Skylake-Server-v5'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512bw'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512cd'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512dq'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512f'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='avx512vl'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='invpcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pcid'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='pku'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='mpx'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v2'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v3'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='core-capability'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='split-lock-detect'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='Snowridge-v4'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='cldemote'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='erms'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='gfni'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdir64b'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='movdiri'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='xsaves'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='athlon-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='core2duo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='coreduo-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='n270-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='ss'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <blockers model='phenom-v1'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnow'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <feature name='3dnowext'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </blockers>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </mode>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </cpu>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <memoryBacking supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <enum name='sourceType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>anonymous</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <value>memfd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </memoryBacking>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <disk supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='diskDevice'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>disk</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cdrom</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>floppy</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>lun</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>fdc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>sata</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </disk>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <graphics supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vnc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egl-headless</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </graphics>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <video supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='modelType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vga</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>cirrus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>none</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>bochs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ramfb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </video>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hostdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='mode'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>subsystem</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='startupPolicy'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>mandatory</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>requisite</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>optional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='subsysType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pci</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>scsi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='capsType'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='pciBackend'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hostdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <rng supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtio-non-transitional</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>random</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>egd</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </rng>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <filesystem supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='driverType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>path</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>handle</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>virtiofs</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </filesystem>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tpm supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-tis</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tpm-crb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emulator</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>external</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendVersion'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>2.0</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </tpm>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <redirdev supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='bus'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>usb</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </redirdev>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <channel supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </channel>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <crypto supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendModel'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>builtin</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </crypto>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <interface supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='backendType'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>default</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>passt</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </interface>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <panic supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='model'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>isa</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>hyperv</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </panic>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <console supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='type'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>null</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vc</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pty</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dev</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>file</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>pipe</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stdio</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>udp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tcp</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>unix</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>qemu-vdagent</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>dbus</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </console>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </devices>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   <features>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <gic supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <vmcoreinfo supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <genid supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backingStoreInput supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <backup supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <async-teardown supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <s390-pv supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <ps2 supported='yes'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <tdx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sev supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <sgx supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <hyperv supported='yes'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <enum name='features'>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>relaxed</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vapic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>spinlocks</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vpindex</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>runtime</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>synic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>stimer</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reset</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>vendor_id</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>frequencies</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>reenlightenment</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>tlbflush</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>ipi</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>avic</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>emsr_bitmap</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <value>xmm_input</value>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </enum>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       <defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <spinlocks>4095</spinlocks>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <stimer_direct>on</stimer_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_direct>on</tlbflush_direct>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <tlbflush_extended>on</tlbflush_extended>
Feb 01 15:06:41 compute-0 nova_compute[238794]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 01 15:06:41 compute-0 nova_compute[238794]:       </defaults>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     </hyperv>
Feb 01 15:06:41 compute-0 nova_compute[238794]:     <launchSecurity supported='no'/>
Feb 01 15:06:41 compute-0 nova_compute[238794]:   </features>
Feb 01 15:06:41 compute-0 nova_compute[238794]: </domainCapabilities>
Feb 01 15:06:41 compute-0 nova_compute[238794]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.442 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.443 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.443 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.452 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Secure Boot support detected
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.455 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.455 238798 INFO nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.463 238798 DEBUG nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.573 238798 INFO nova.virt.node [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Determined node identity 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from /var/lib/nova/compute_id
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.599 238798 WARNING nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute nodes ['1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.641 238798 INFO nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 WARNING nova.compute.manager [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.687 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.688 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:06:41 compute-0 nova_compute[238794]: 2026-02-01 15:06:41.688 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:06:41 compute-0 podman[239115]: 2026-02-01 15:06:41.970269324 +0000 UTC m=+0.057646717 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:06:42 compute-0 podman[239116]: 2026-02-01 15:06:42.033027233 +0000 UTC m=+0.116053794 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 01 15:06:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:06:42 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094871060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.228 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:06:42 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Feb 01 15:06:42 compute-0 systemd[1]: Started libvirt nodedev daemon.
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.537 238798 WARNING nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.539 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.540 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.540 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.590 238798 WARNING nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] No compute node record for compute-0.ctlplane.example.com:1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 could not be found.
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.627 238798 INFO nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.699 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:06:42 compute-0 nova_compute[238794]: 2026-02-01 15:06:42.699 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:06:43 compute-0 ceph-mon[75179]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:43 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3094871060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:06:43 compute-0 nova_compute[238794]: 2026-02-01 15:06:43.587 238798 INFO nova.scheduler.client.report [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] [req-3102e39e-46ff-4296-8902-516294c380d5] Created resource provider record via placement API for resource provider with UUID 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 and name compute-0.ctlplane.example.com.
Feb 01 15:06:43 compute-0 nova_compute[238794]: 2026-02-01 15:06:43.975 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:06:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:06:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619802062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.525 238798 DEBUG oslo_concurrency.processutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.530 238798 DEBUG nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 01 15:06:44 compute-0 nova_compute[238794]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.530 238798 INFO nova.virt.libvirt.host [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] kernel doesn't support AMD SEV
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.531 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.532 238798 DEBUG nova.virt.libvirt.driver [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.615 238798 DEBUG nova.scheduler.client.report [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updated inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.615 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.616 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.781 238798 DEBUG nova.compute.provider_tree [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Updating resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.811 238798 DEBUG nova.compute.resource_tracker [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.811 238798 DEBUG oslo_concurrency.lockutils [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:06:44 compute-0 nova_compute[238794]: 2026-02-01 15:06:44.812 238798 DEBUG nova.service [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 01 15:06:45 compute-0 nova_compute[238794]: 2026-02-01 15:06:45.021 238798 DEBUG nova.service [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 01 15:06:45 compute-0 nova_compute[238794]: 2026-02-01 15:06:45.021 238798 DEBUG nova.servicegroup.drivers.db [None req-452ece86-1b69-4f47-ab9b-5f6ff7fcff8e - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 01 15:06:45 compute-0 ceph-mon[75179]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:45 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1619802062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:06:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:47 compute-0 ceph-mon[75179]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:06:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:06:49 compute-0 ceph-mon[75179]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:51 compute-0 ceph-mon[75179]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:53 compute-0 ceph-mon[75179]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:55 compute-0 ceph-mon[75179]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:06:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:57 compute-0 ceph-mon[75179]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:06:59 compute-0 ceph-mon[75179]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:01 compute-0 ceph-mon[75179]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:03 compute-0 sudo[239205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:07:03 compute-0 sudo[239205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:03 compute-0 sudo[239205]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:03 compute-0 sudo[239230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:07:03 compute-0 sudo[239230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:03 compute-0 sudo[239230]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:07:03 compute-0 sudo[239285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:07:03 compute-0 sudo[239285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:03 compute-0 sudo[239285]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:07:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:03 compute-0 sudo[239310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:07:03 compute-0 sudo[239310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.052196397 +0000 UTC m=+0.043693416 container create 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 01 15:07:04 compute-0 systemd[1]: Started libpod-conmon-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope.
Feb 01 15:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.11610526 +0000 UTC m=+0.107602299 container init 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:07:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.121217983 +0000 UTC m=+0.112715002 container start 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.123872878 +0000 UTC m=+0.115369937 container attach 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 01 15:07:04 compute-0 wizardly_curie[239364]: 167 167
Feb 01 15:07:04 compute-0 systemd[1]: libpod-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope: Deactivated successfully.
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.030743306 +0000 UTC m=+0.022240365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.125969407 +0000 UTC m=+0.117466406 container died 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-13f38c7df6839b8799cd7c419f8378300138bdd745228325af0dbcb37adaaf7f-merged.mount: Deactivated successfully.
Feb 01 15:07:04 compute-0 podman[239347]: 2026-02-01 15:07:04.165210807 +0000 UTC m=+0.156707826 container remove 5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:07:04 compute-0 systemd[1]: libpod-conmon-5c00004424adca4e610343173b669a7111543759bb0aec2061b2b08b7c896fb0.scope: Deactivated successfully.
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3609046524' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.297608981 +0000 UTC m=+0.049655724 container create 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 systemd[1]: Started libpod-conmon-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope.
Feb 01 15:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.280742048 +0000 UTC m=+0.032788821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.376780612 +0000 UTC m=+0.128827365 container init 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.385327702 +0000 UTC m=+0.137374435 container start 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.388936683 +0000 UTC m=+0.140983446 container attach 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:07:04 compute-0 intelligent_keldysh[239406]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:07:04 compute-0 intelligent_keldysh[239406]: --> All data devices are unavailable
Feb 01 15:07:04 compute-0 systemd[1]: libpod-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope: Deactivated successfully.
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.814984254 +0000 UTC m=+0.567031017 container died 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:07:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d280a0ee537bd9a08742f3409145e2a0795b27fc81bd17f7732765ed3e6bdcb-merged.mount: Deactivated successfully.
Feb 01 15:07:04 compute-0 podman[239389]: 2026-02-01 15:07:04.860744397 +0000 UTC m=+0.612791130 container remove 15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:07:04 compute-0 systemd[1]: libpod-conmon-15cdca24c6e68701c06f160c509d81fc20da10f3ca31cca71835cf2cc110f11e.scope: Deactivated successfully.
Feb 01 15:07:04 compute-0 sudo[239310]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:04 compute-0 sudo[239437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:07:04 compute-0 sudo[239437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:04 compute-0 sudo[239437]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:05 compute-0 sudo[239462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:07:05 compute-0 sudo[239462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:05 compute-0 ceph-mon[75179]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3970080740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:07:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/1644827985' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.275037999 +0000 UTC m=+0.035467396 container create 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:07:05 compute-0 systemd[1]: Started libpod-conmon-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope.
Feb 01 15:07:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.323163038 +0000 UTC m=+0.083592435 container init 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.326875983 +0000 UTC m=+0.087305360 container start 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 01 15:07:05 compute-0 pensive_bell[239515]: 167 167
Feb 01 15:07:05 compute-0 systemd[1]: libpod-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope: Deactivated successfully.
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.331252225 +0000 UTC m=+0.091681602 container attach 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.331721559 +0000 UTC m=+0.092150936 container died 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-651c9af229070a734b8a9a07e6d86adb0445919b7cdb9356b9aa9be9904f67e0-merged.mount: Deactivated successfully.
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.257852306 +0000 UTC m=+0.018281743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:05 compute-0 podman[239500]: 2026-02-01 15:07:05.363107699 +0000 UTC m=+0.123537076 container remove 8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:07:05 compute-0 systemd[1]: libpod-conmon-8f9566007761adc86963893a15b650dc8435351ee82fc783328f1a4dcd5b5b44.scope: Deactivated successfully.
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.472356963 +0000 UTC m=+0.038964294 container create 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:07:05 compute-0 systemd[1]: Started libpod-conmon-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope.
Feb 01 15:07:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.456481958 +0000 UTC m=+0.023089309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.558360296 +0000 UTC m=+0.124967647 container init 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.562653116 +0000 UTC m=+0.129260477 container start 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.56599599 +0000 UTC m=+0.132603411 container attach 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]: {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     "0": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "devices": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "/dev/loop3"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             ],
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_name": "ceph_lv0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_size": "21470642176",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "name": "ceph_lv0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "tags": {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_name": "ceph",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.crush_device_class": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.encrypted": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.objectstore": "bluestore",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_id": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.vdo": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.with_tpm": "0"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             },
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "vg_name": "ceph_vg0"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         }
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     ],
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     "1": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "devices": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "/dev/loop4"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             ],
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_name": "ceph_lv1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_size": "21470642176",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "name": "ceph_lv1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "tags": {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_name": "ceph",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.crush_device_class": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.encrypted": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.objectstore": "bluestore",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_id": "1",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.vdo": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.with_tpm": "0"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             },
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "vg_name": "ceph_vg1"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         }
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     ],
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     "2": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "devices": [
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "/dev/loop5"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             ],
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_name": "ceph_lv2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_size": "21470642176",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "name": "ceph_lv2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "tags": {
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.cluster_name": "ceph",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.crush_device_class": "",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.encrypted": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.objectstore": "bluestore",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osd_id": "2",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.vdo": "0",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:                 "ceph.with_tpm": "0"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             },
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "type": "block",
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:             "vg_name": "ceph_vg2"
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:         }
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]:     ]
Feb 01 15:07:05 compute-0 angry_mirzakhani[239558]: }
Feb 01 15:07:05 compute-0 systemd[1]: libpod-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope: Deactivated successfully.
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.818171043 +0000 UTC m=+0.384778414 container died 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a2062d78384babf35fb53088ec3ba52dd38469674056844df313f18193df2e-merged.mount: Deactivated successfully.
Feb 01 15:07:05 compute-0 podman[239541]: 2026-02-01 15:07:05.861704344 +0000 UTC m=+0.428311685 container remove 66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:07:05 compute-0 systemd[1]: libpod-conmon-66e5e7eaf68dd0dac405028287f01f8f1554a5485e3ce7366f7717c7a32eccb5.scope: Deactivated successfully.
Feb 01 15:07:05 compute-0 sudo[239462]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:05 compute-0 sudo[239579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:07:05 compute-0 sudo[239579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:05 compute-0 sudo[239579]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:06 compute-0 sudo[239604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:07:06 compute-0 sudo[239604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.295680697 +0000 UTC m=+0.048270835 container create 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 01 15:07:06 compute-0 systemd[1]: Started libpod-conmon-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope.
Feb 01 15:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.355972128 +0000 UTC m=+0.108562346 container init 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.361984557 +0000 UTC m=+0.114574685 container start 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:07:06 compute-0 systemd[1]: libpod-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope: Deactivated successfully.
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.365624119 +0000 UTC m=+0.118214337 container attach 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 15:07:06 compute-0 nervous_taussig[239657]: 167 167
Feb 01 15:07:06 compute-0 conmon[239657]: conmon 744627a74af973283a9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope/container/memory.events
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.366452362 +0000 UTC m=+0.119042520 container died 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.279921415 +0000 UTC m=+0.032511593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3148389af57955aee4f77f4b6301a9e1052bb033b92f0e8c5564aad0d9452867-merged.mount: Deactivated successfully.
Feb 01 15:07:06 compute-0 podman[239641]: 2026-02-01 15:07:06.399007275 +0000 UTC m=+0.151597433 container remove 744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:07:06 compute-0 systemd[1]: libpod-conmon-744627a74af973283a9fd6df80b2f62bbf95a6053497c5cffd7205f9c0bab536.scope: Deactivated successfully.
Feb 01 15:07:06 compute-0 podman[239681]: 2026-02-01 15:07:06.559130217 +0000 UTC m=+0.041756873 container create aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 15:07:06 compute-0 systemd[1]: Started libpod-conmon-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope.
Feb 01 15:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:07:06 compute-0 podman[239681]: 2026-02-01 15:07:06.539830555 +0000 UTC m=+0.022457241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:07:06 compute-0 podman[239681]: 2026-02-01 15:07:06.646827937 +0000 UTC m=+0.129454603 container init aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 01 15:07:06 compute-0 podman[239681]: 2026-02-01 15:07:06.651998692 +0000 UTC m=+0.134625338 container start aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:07:06 compute-0 podman[239681]: 2026-02-01 15:07:06.655183581 +0000 UTC m=+0.137810227 container attach aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 15:07:07 compute-0 lvm[239776]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:07:07 compute-0 lvm[239775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:07:07 compute-0 lvm[239775]: VG ceph_vg0 finished
Feb 01 15:07:07 compute-0 lvm[239776]: VG ceph_vg1 finished
Feb 01 15:07:07 compute-0 lvm[239778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:07:07 compute-0 lvm[239778]: VG ceph_vg2 finished
Feb 01 15:07:07 compute-0 ceph-mon[75179]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:07 compute-0 hardcore_yalow[239697]: {}
Feb 01 15:07:07 compute-0 systemd[1]: libpod-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Deactivated successfully.
Feb 01 15:07:07 compute-0 podman[239681]: 2026-02-01 15:07:07.385462926 +0000 UTC m=+0.868089572 container died aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:07:07 compute-0 systemd[1]: libpod-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Consumed 1.011s CPU time.
Feb 01 15:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3695cd5475451543f85870988206d9a1b488b329aa39b7fedd994cebced08287-merged.mount: Deactivated successfully.
Feb 01 15:07:07 compute-0 podman[239681]: 2026-02-01 15:07:07.422263928 +0000 UTC m=+0.904890574 container remove aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:07:07 compute-0 systemd[1]: libpod-conmon-aabecf266e824c681366173d0fede0b2b95e3d5588db9f4a3882a3f3af86c9a2.scope: Deactivated successfully.
Feb 01 15:07:07 compute-0 sudo[239604]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:07:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:07:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:07 compute-0 sudo[239793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:07:07 compute-0 sudo[239793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:07:07 compute-0 sudo[239793]: pam_unix(sudo:session): session closed for user root
Feb 01 15:07:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.801 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:07:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:07:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:07:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:07:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:07:09 compute-0 ceph-mon[75179]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:11 compute-0 ceph-mon[75179]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:12 compute-0 podman[239818]: 2026-02-01 15:07:12.988374726 +0000 UTC m=+0.056978438 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 01 15:07:13 compute-0 podman[239819]: 2026-02-01 15:07:13.013067339 +0000 UTC m=+0.086410614 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 01 15:07:13 compute-0 ceph-mon[75179]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 01 15:07:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:15 compute-0 ceph-mon[75179]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:17 compute-0 ceph-mon[75179]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:07:17
Feb 01 15:07:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:07:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:07:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log']
Feb 01 15:07:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:19 compute-0 ceph-mon[75179]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:21 compute-0 ceph-mon[75179]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:21 compute-0 sshd-session[239861]: Invalid user ubuntu from 80.94.92.171 port 52668
Feb 01 15:07:21 compute-0 sshd-session[239861]: Connection closed by invalid user ubuntu 80.94.92.171 port 52668 [preauth]
Feb 01 15:07:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:23 compute-0 ceph-mon[75179]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:25 compute-0 ceph-mon[75179]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:27 compute-0 ceph-mon[75179]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:07:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:07:29 compute-0 ceph-mon[75179]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:31 compute-0 ceph-mon[75179]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:33 compute-0 ceph-mon[75179]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:35 compute-0 ceph-mon[75179]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:36 compute-0 nova_compute[238794]: 2026-02-01 15:07:36.024 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.026282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456026356, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1871, "num_deletes": 251, "total_data_size": 3203739, "memory_usage": 3250424, "flush_reason": "Manual Compaction"}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456036383, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1802269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11744, "largest_seqno": 13614, "table_properties": {"data_size": 1796203, "index_size": 3077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15257, "raw_average_key_size": 20, "raw_value_size": 1782777, "raw_average_value_size": 2358, "num_data_blocks": 142, "num_entries": 756, "num_filter_entries": 756, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958242, "oldest_key_time": 1769958242, "file_creation_time": 1769958456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10123 microseconds, and 4946 cpu microseconds.
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.036433) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1802269 bytes OK
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.036455) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038015) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038038) EVENT_LOG_v1 {"time_micros": 1769958456038031, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038059) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3195839, prev total WAL file size 3195839, number of live WAL files 2.
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038883) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1760KB)], [29(7862KB)]
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456038959, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9853173, "oldest_snapshot_seqno": -1}
Feb 01 15:07:36 compute-0 nova_compute[238794]: 2026-02-01 15:07:36.052 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4044 keys, 7842936 bytes, temperature: kUnknown
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456087968, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7842936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7813932, "index_size": 17822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96088, "raw_average_key_size": 23, "raw_value_size": 7739041, "raw_average_value_size": 1913, "num_data_blocks": 777, "num_entries": 4044, "num_filter_entries": 4044, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.088385) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7842936 bytes
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.089757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.6 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 4457, records dropped: 413 output_compression: NoCompression
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.089789) EVENT_LOG_v1 {"time_micros": 1769958456089773, "job": 12, "event": "compaction_finished", "compaction_time_micros": 49117, "compaction_time_cpu_micros": 19488, "output_level": 6, "num_output_files": 1, "total_output_size": 7842936, "num_input_records": 4457, "num_output_records": 4044, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456090213, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958456091716, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.038783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:07:36.091796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:07:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:37 compute-0 ceph-mon[75179]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:39 compute-0 ceph-mon[75179]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.351 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.353 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.417 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.418 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:07:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:07:40 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587570910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:07:40 compute-0 nova_compute[238794]: 2026-02-01 15:07:40.895 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:07:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.098 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.099 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:07:41 compute-0 ceph-mon[75179]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:41 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1587570910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.265 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.265 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.298 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:07:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:07:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263713135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.834 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.838 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.858 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.859 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:07:41 compute-0 nova_compute[238794]: 2026-02-01 15:07:41.860 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:07:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3263713135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:07:43 compute-0 ceph-mon[75179]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:43 compute-0 podman[239907]: 2026-02-01 15:07:43.984963744 +0000 UTC m=+0.059286574 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 01 15:07:44 compute-0 podman[239908]: 2026-02-01 15:07:44.038359641 +0000 UTC m=+0.112960849 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 01 15:07:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:45 compute-0 ceph-mon[75179]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:47 compute-0 ceph-mon[75179]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:07:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:07:49 compute-0 ceph-mon[75179]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 01 15:07:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3497808587' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 01 15:07:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 01 15:07:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 01 15:07:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 01 15:07:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3497808587' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 01 15:07:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:51 compute-0 ceph-mon[75179]: from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 01 15:07:51 compute-0 ceph-mon[75179]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:53 compute-0 ceph-mon[75179]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:55 compute-0 ceph-mon[75179]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:07:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:57 compute-0 ceph-mon[75179]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:07:59 compute-0 ceph-mon[75179]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:01 compute-0 ceph-mon[75179]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:03 compute-0 ceph-mon[75179]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:05 compute-0 ceph-mon[75179]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb 01 15:08:06 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 01 15:08:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 01 15:08:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 01 15:08:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb 01 15:08:07 compute-0 ceph-mon[75179]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:07 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb 01 15:08:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb 01 15:08:07 compute-0 sudo[239952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:08:07 compute-0 sudo[239952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:07 compute-0 sudo[239952]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:07 compute-0 sudo[239977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:08:07 compute-0 sudo[239977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.802 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:08:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.803 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:08:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:07.803 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:08:08 compute-0 sudo[239977]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:08:08 compute-0 sudo[240033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:08:08 compute-0 sudo[240033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:08 compute-0 sudo[240033]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:08 compute-0 sudo[240058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:08:08 compute-0 sudo[240058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:08:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.556505939 +0000 UTC m=+0.046478465 container create 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:08:08 compute-0 systemd[1]: Started libpod-conmon-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope.
Feb 01 15:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.628473858 +0000 UTC m=+0.118446434 container init 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.540611633 +0000 UTC m=+0.030584209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.636846973 +0000 UTC m=+0.126819499 container start 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.640410953 +0000 UTC m=+0.130383489 container attach 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:08:08 compute-0 epic_mayer[240112]: 167 167
Feb 01 15:08:08 compute-0 systemd[1]: libpod-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope: Deactivated successfully.
Feb 01 15:08:08 compute-0 conmon[240112]: conmon 5a0b69d9c4317a76305b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope/container/memory.events
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.644160348 +0000 UTC m=+0.134132924 container died 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-50b44a5125852e65fabf5e833ac7e78cc6799ca58df5d0b78d9abcd6958b9975-merged.mount: Deactivated successfully.
Feb 01 15:08:08 compute-0 podman[240096]: 2026-02-01 15:08:08.679357035 +0000 UTC m=+0.169329571 container remove 5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mayer, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 15:08:08 compute-0 systemd[1]: libpod-conmon-5a0b69d9c4317a76305b929969843337e9743a524981c57282e606ef7694b355.scope: Deactivated successfully.
Feb 01 15:08:08 compute-0 podman[240135]: 2026-02-01 15:08:08.805821663 +0000 UTC m=+0.033967194 container create 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:08:08 compute-0 systemd[1]: Started libpod-conmon-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope.
Feb 01 15:08:08 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:08 compute-0 podman[240135]: 2026-02-01 15:08:08.789257538 +0000 UTC m=+0.017403109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:08 compute-0 podman[240135]: 2026-02-01 15:08:08.899418988 +0000 UTC m=+0.127564579 container init 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:08:08 compute-0 podman[240135]: 2026-02-01 15:08:08.90840581 +0000 UTC m=+0.136551361 container start 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:08:08 compute-0 podman[240135]: 2026-02-01 15:08:08.911745874 +0000 UTC m=+0.139891415 container attach 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:08:09 compute-0 inspiring_ramanujan[240151]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:08:09 compute-0 inspiring_ramanujan[240151]: --> All data devices are unavailable
Feb 01 15:08:09 compute-0 systemd[1]: libpod-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope: Deactivated successfully.
Feb 01 15:08:09 compute-0 conmon[240151]: conmon 148706f96b960f2ab0f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope/container/memory.events
Feb 01 15:08:09 compute-0 podman[240135]: 2026-02-01 15:08:09.343266438 +0000 UTC m=+0.571411979 container died 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ad23ba75b5343420a4a222a00a4dd65633f8730ac2cd70b165464f280f1894-merged.mount: Deactivated successfully.
Feb 01 15:08:09 compute-0 podman[240135]: 2026-02-01 15:08:09.384929167 +0000 UTC m=+0.613074718 container remove 148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ramanujan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:08:09 compute-0 systemd[1]: libpod-conmon-148706f96b960f2ab0f4d293c6dafa4e9aef457f2b90da4a28e2aeb1697375c6.scope: Deactivated successfully.
Feb 01 15:08:09 compute-0 sudo[240058]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:09 compute-0 ceph-mon[75179]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:09 compute-0 sudo[240183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:08:09 compute-0 sudo[240183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:09 compute-0 sudo[240183]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:09 compute-0 sudo[240208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:08:09 compute-0 sudo[240208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.791270515 +0000 UTC m=+0.040482007 container create a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:08:09 compute-0 systemd[1]: Started libpod-conmon-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope.
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.769949607 +0000 UTC m=+0.019161179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:09 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.88272994 +0000 UTC m=+0.131941452 container init a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.88985167 +0000 UTC m=+0.139063162 container start a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.89307584 +0000 UTC m=+0.142287342 container attach a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 15:08:09 compute-0 crazy_yonath[240261]: 167 167
Feb 01 15:08:09 compute-0 systemd[1]: libpod-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope: Deactivated successfully.
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.896911298 +0000 UTC m=+0.146122820 container died a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b4606808dd8ac03d7ff967ee016aef21301ef626310522739cec18bb0a34256-merged.mount: Deactivated successfully.
Feb 01 15:08:09 compute-0 podman[240245]: 2026-02-01 15:08:09.981856491 +0000 UTC m=+0.231068013 container remove a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:08:09 compute-0 systemd[1]: libpod-conmon-a90ac9ddf86b2233618a9284ec6bf799e6a9d100bd1baffe1b502d28178562a0.scope: Deactivated successfully.
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.144916595 +0000 UTC m=+0.046789654 container create a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:08:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:10 compute-0 systemd[1]: Started libpod-conmon-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope.
Feb 01 15:08:10 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.210402341 +0000 UTC m=+0.112275410 container init a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.215402511 +0000 UTC m=+0.117275570 container start a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.219479125 +0000 UTC m=+0.121352174 container attach a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.127865256 +0000 UTC m=+0.029738295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]: {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     "0": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "devices": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "/dev/loop3"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             ],
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_name": "ceph_lv0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_size": "21470642176",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "name": "ceph_lv0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "tags": {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_name": "ceph",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.crush_device_class": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.encrypted": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.objectstore": "bluestore",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_id": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.vdo": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.with_tpm": "0"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             },
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "vg_name": "ceph_vg0"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         }
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     ],
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     "1": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "devices": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "/dev/loop4"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             ],
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_name": "ceph_lv1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_size": "21470642176",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "name": "ceph_lv1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "tags": {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_name": "ceph",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.crush_device_class": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.encrypted": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.objectstore": "bluestore",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_id": "1",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.vdo": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.with_tpm": "0"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             },
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "vg_name": "ceph_vg1"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         }
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     ],
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     "2": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "devices": [
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "/dev/loop5"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             ],
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_name": "ceph_lv2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_size": "21470642176",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "name": "ceph_lv2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "tags": {
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.cluster_name": "ceph",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.crush_device_class": "",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.encrypted": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.objectstore": "bluestore",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osd_id": "2",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.vdo": "0",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:                 "ceph.with_tpm": "0"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             },
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "type": "block",
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:             "vg_name": "ceph_vg2"
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:         }
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]:     ]
Feb 01 15:08:10 compute-0 jovial_mendeleev[240304]: }
Feb 01 15:08:10 compute-0 systemd[1]: libpod-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope: Deactivated successfully.
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.482488773 +0000 UTC m=+0.384361802 container died a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb5a30f1969e0cc033215697d1b215a65c68475c4580a7013861bdc65b62865b-merged.mount: Deactivated successfully.
Feb 01 15:08:10 compute-0 podman[240287]: 2026-02-01 15:08:10.524493701 +0000 UTC m=+0.426366730 container remove a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mendeleev, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:08:10 compute-0 systemd[1]: libpod-conmon-a79c584b11412d59bb4588d4c370a1727b7a537ca8c673a2cf2f7af4ec676661.scope: Deactivated successfully.
Feb 01 15:08:10 compute-0 sudo[240208]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:10 compute-0 sudo[240325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:08:10 compute-0 sudo[240325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:10 compute-0 sudo[240325]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:10 compute-0 sudo[240350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:08:10 compute-0 sudo[240350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:10 compute-0 podman[240386]: 2026-02-01 15:08:10.954584875 +0000 UTC m=+0.041316270 container create 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 15:08:10 compute-0 systemd[1]: Started libpod-conmon-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope.
Feb 01 15:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:11.027062948 +0000 UTC m=+0.113794363 container init 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:08:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:10.934572574 +0000 UTC m=+0.021303959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:11.037337166 +0000 UTC m=+0.124068531 container start 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:08:11 compute-0 friendly_engelbart[240402]: 167 167
Feb 01 15:08:11 compute-0 systemd[1]: libpod-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope: Deactivated successfully.
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:11.041577955 +0000 UTC m=+0.128309320 container attach 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:11.042027158 +0000 UTC m=+0.128758533 container died 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-844b428d7218f8032635d9de88e00a8d7004080e89e32bd85ef9e06a34eed583-merged.mount: Deactivated successfully.
Feb 01 15:08:11 compute-0 podman[240386]: 2026-02-01 15:08:11.07847434 +0000 UTC m=+0.165205715 container remove 3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:08:11 compute-0 systemd[1]: libpod-conmon-3929e90c0495b5015adc6293a1d55c19dec5a1ddc1fa8c7ef0a2fddd3e3af886.scope: Deactivated successfully.
Feb 01 15:08:11 compute-0 podman[240427]: 2026-02-01 15:08:11.269952211 +0000 UTC m=+0.079087769 container create d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:08:11 compute-0 systemd[1]: Started libpod-conmon-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope.
Feb 01 15:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:08:11 compute-0 podman[240427]: 2026-02-01 15:08:11.348612608 +0000 UTC m=+0.157748186 container init d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:08:11 compute-0 podman[240427]: 2026-02-01 15:08:11.257579424 +0000 UTC m=+0.066715002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:08:11 compute-0 podman[240427]: 2026-02-01 15:08:11.356775316 +0000 UTC m=+0.165910884 container start d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 15:08:11 compute-0 podman[240427]: 2026-02-01 15:08:11.359867253 +0000 UTC m=+0.169002831 container attach d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 15:08:11 compute-0 ceph-mon[75179]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:11 compute-0 lvm[240522]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:08:11 compute-0 lvm[240521]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:08:11 compute-0 lvm[240521]: VG ceph_vg0 finished
Feb 01 15:08:11 compute-0 lvm[240522]: VG ceph_vg1 finished
Feb 01 15:08:12 compute-0 lvm[240524]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:08:12 compute-0 lvm[240524]: VG ceph_vg2 finished
Feb 01 15:08:12 compute-0 trusting_borg[240443]: {}
Feb 01 15:08:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:12 compute-0 systemd[1]: libpod-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Deactivated successfully.
Feb 01 15:08:12 compute-0 systemd[1]: libpod-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Consumed 1.111s CPU time.
Feb 01 15:08:12 compute-0 podman[240427]: 2026-02-01 15:08:12.154548364 +0000 UTC m=+0.963683932 container died d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb 01 15:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2459120fc8833d03d70ca178871f23d1b91df347876bd693e358677bfce28a3b-merged.mount: Deactivated successfully.
Feb 01 15:08:12 compute-0 podman[240427]: 2026-02-01 15:08:12.19718325 +0000 UTC m=+1.006318808 container remove d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:08:12 compute-0 systemd[1]: libpod-conmon-d2b8e15ed2445e9ffb9dcbcc13619a1167a7fef20408439203909883b07ecb31.scope: Deactivated successfully.
Feb 01 15:08:12 compute-0 sudo[240350]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:08:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:08:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:12 compute-0 sudo[240539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:08:12 compute-0 sudo[240539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:08:12 compute-0 sudo[240539]: pam_unix(sudo:session): session closed for user root
Feb 01 15:08:13 compute-0 ceph-mon[75179]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:08:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:14 compute-0 podman[240564]: 2026-02-01 15:08:14.957853228 +0000 UTC m=+0.048281146 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:08:15 compute-0 podman[240565]: 2026-02-01 15:08:15.029971751 +0000 UTC m=+0.112180698 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 01 15:08:15 compute-0 ceph-mon[75179]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:17 compute-0 ceph-mon[75179]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:08:17
Feb 01 15:08:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:08:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:08:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', '.rgw.root']
Feb 01 15:08:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:19 compute-0 ceph-mon[75179]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:21 compute-0 ceph-mon[75179]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:23 compute-0 ceph-mon[75179]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:25 compute-0 ceph-mon[75179]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:27 compute-0 ceph-mon[75179]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:08:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:08:29 compute-0 ceph-mon[75179]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:31 compute-0 ceph-mon[75179]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:33 compute-0 ceph-mon[75179]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:35 compute-0 ceph-mon[75179]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:37 compute-0 ceph-mon[75179]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:39 compute-0 ceph-mon[75179]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.853 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.853 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.879 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.880 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.912 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:08:41 compute-0 nova_compute[238794]: 2026-02-01 15:08:41.913 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:08:41 compute-0 ceph-mon[75179]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:08:42 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350715595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.416 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.572 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5120MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.573 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.655 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.655 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:08:42 compute-0 nova_compute[238794]: 2026-02-01 15:08:42.695 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:08:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/350715595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:08:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:08:43 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458587239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.192 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.196 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.225 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.227 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.227 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.667 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.668 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.668 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.691 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.691 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:43 compute-0 nova_compute[238794]: 2026-02-01 15:08:43.692 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:08:43 compute-0 ceph-mon[75179]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:43 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3458587239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:08:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:45 compute-0 podman[240651]: 2026-02-01 15:08:45.982487683 +0000 UTC m=+0.064291784 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb 01 15:08:46 compute-0 podman[240652]: 2026-02-01 15:08:46.003487242 +0000 UTC m=+0.089198113 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 15:08:46 compute-0 ceph-mon[75179]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:48 compute-0 ceph-mon[75179]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:08:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:08:49 compute-0 ceph-mon[75179]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:08:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:08:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:08:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:08:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:51 compute-0 ceph-mon[75179]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:08:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/774348116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:08:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:53 compute-0 ceph-mon[75179]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:55 compute-0 ceph-mon[75179]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:08:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:56 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.878 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:08:56 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.879 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:08:56 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:08:56.879 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:08:57 compute-0 ceph-mon[75179]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:08:59 compute-0 ceph-mon[75179]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:01 compute-0 ceph-mon[75179]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:03 compute-0 ceph-mon[75179]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:05 compute-0 ceph-mon[75179]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:07 compute-0 ceph-mon[75179]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:09:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:09:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:09:07.804 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:09:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:09 compute-0 ceph-mon[75179]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:11 compute-0 ceph-mon[75179]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:12 compute-0 sudo[240694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:09:12 compute-0 sudo[240694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:12 compute-0 sudo[240694]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:12 compute-0 sudo[240719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:09:12 compute-0 sudo[240719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:12 compute-0 sudo[240719]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:09:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:09:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:09:12 compute-0 sudo[240775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:09:12 compute-0 sudo[240775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:12 compute-0 sudo[240775]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:12 compute-0 sudo[240800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:09:12 compute-0 sudo[240800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.241885923 +0000 UTC m=+0.037618915 container create e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:09:13 compute-0 systemd[1]: Started libpod-conmon-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope.
Feb 01 15:09:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.306550695 +0000 UTC m=+0.102283687 container init e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.31103659 +0000 UTC m=+0.106769582 container start e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.31388209 +0000 UTC m=+0.109615102 container attach e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:09:13 compute-0 systemd[1]: libpod-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope: Deactivated successfully.
Feb 01 15:09:13 compute-0 stoic_banzai[240853]: 167 167
Feb 01 15:09:13 compute-0 conmon[240853]: conmon e2054249645289c7dd2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope/container/memory.events
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.315486045 +0000 UTC m=+0.111219037 container died e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.227118459 +0000 UTC m=+0.022851451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fed91d021305d3fc4b41599531131e80cd5b89a58f8a31e35bf13f72672c97b-merged.mount: Deactivated successfully.
Feb 01 15:09:13 compute-0 podman[240837]: 2026-02-01 15:09:13.349518888 +0000 UTC m=+0.145251880 container remove e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 01 15:09:13 compute-0 ceph-mon[75179]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:09:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:09:13 compute-0 systemd[1]: libpod-conmon-e2054249645289c7dd2f380003a4f30dcc34d94cc378b8f949f8378a1353745f.scope: Deactivated successfully.
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.448381408 +0000 UTC m=+0.028017306 container create 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 15:09:13 compute-0 systemd[1]: Started libpod-conmon-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope.
Feb 01 15:09:13 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.505546 +0000 UTC m=+0.085181928 container init 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.510329474 +0000 UTC m=+0.089965372 container start 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.513176373 +0000 UTC m=+0.092812291 container attach 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.437146933 +0000 UTC m=+0.016782851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:13 compute-0 stoic_montalcini[240894]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:09:13 compute-0 stoic_montalcini[240894]: --> All data devices are unavailable
Feb 01 15:09:13 compute-0 systemd[1]: libpod-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope: Deactivated successfully.
Feb 01 15:09:13 compute-0 podman[240878]: 2026-02-01 15:09:13.877002356 +0000 UTC m=+0.456638254 container died 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f29eadcb77279691eb4281d4799759f9433dcba264d7a6e21f5f176dde8cc4ef-merged.mount: Deactivated successfully.
Feb 01 15:09:14 compute-0 podman[240878]: 2026-02-01 15:09:14.102134814 +0000 UTC m=+0.681770712 container remove 95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:09:14 compute-0 systemd[1]: libpod-conmon-95686d5e56c9220687791acfea465874999fa842def2946bfa85bd4393ee9e7d.scope: Deactivated successfully.
Feb 01 15:09:14 compute-0 sudo[240800]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:14 compute-0 sudo[240926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:09:14 compute-0 sudo[240926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:14 compute-0 sudo[240926]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:14 compute-0 sudo[240951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:09:14 compute-0 sudo[240951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.457954042 +0000 UTC m=+0.037048909 container create c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:09:14 compute-0 systemd[1]: Started libpod-conmon-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope.
Feb 01 15:09:14 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.525878705 +0000 UTC m=+0.104973592 container init c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.531127202 +0000 UTC m=+0.110222069 container start c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:09:14 compute-0 systemd[1]: libpod-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope: Deactivated successfully.
Feb 01 15:09:14 compute-0 distracted_sutherland[241005]: 167 167
Feb 01 15:09:14 compute-0 conmon[241005]: conmon c89bfc085270b69c0c92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope/container/memory.events
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.441859591 +0000 UTC m=+0.020954558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.537040958 +0000 UTC m=+0.116135845 container attach c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.537399638 +0000 UTC m=+0.116494505 container died c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd37e92aea0f5151b20a0c477390d70db35fdf1778d0cd092d0db50c38801461-merged.mount: Deactivated successfully.
Feb 01 15:09:14 compute-0 podman[240989]: 2026-02-01 15:09:14.613065408 +0000 UTC m=+0.192160275 container remove c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:09:14 compute-0 systemd[1]: libpod-conmon-c89bfc085270b69c0c92695a8f3b4935de174a54aac120a46d952589accab74f.scope: Deactivated successfully.
Feb 01 15:09:14 compute-0 podman[241031]: 2026-02-01 15:09:14.755047694 +0000 UTC m=+0.048240172 container create 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:09:14 compute-0 systemd[1]: Started libpod-conmon-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope.
Feb 01 15:09:14 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:14 compute-0 podman[241031]: 2026-02-01 15:09:14.826371222 +0000 UTC m=+0.119563790 container init 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 15:09:14 compute-0 podman[241031]: 2026-02-01 15:09:14.834394427 +0000 UTC m=+0.127586905 container start 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 15:09:14 compute-0 podman[241031]: 2026-02-01 15:09:14.837549686 +0000 UTC m=+0.130742174 container attach 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:09:14 compute-0 podman[241031]: 2026-02-01 15:09:14.741311079 +0000 UTC m=+0.034503577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:15 compute-0 jolly_turing[241047]: {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     "0": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "devices": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "/dev/loop3"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             ],
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_name": "ceph_lv0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_size": "21470642176",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "name": "ceph_lv0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "tags": {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_name": "ceph",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.crush_device_class": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.encrypted": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.objectstore": "bluestore",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_id": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.vdo": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.with_tpm": "0"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             },
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "vg_name": "ceph_vg0"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         }
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     ],
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     "1": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "devices": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "/dev/loop4"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             ],
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_name": "ceph_lv1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_size": "21470642176",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "name": "ceph_lv1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "tags": {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_name": "ceph",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.crush_device_class": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.encrypted": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.objectstore": "bluestore",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_id": "1",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.vdo": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.with_tpm": "0"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             },
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "vg_name": "ceph_vg1"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         }
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     ],
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     "2": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "devices": [
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "/dev/loop5"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             ],
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_name": "ceph_lv2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_size": "21470642176",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "name": "ceph_lv2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "tags": {
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.cluster_name": "ceph",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.crush_device_class": "",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.encrypted": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.objectstore": "bluestore",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osd_id": "2",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.vdo": "0",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:                 "ceph.with_tpm": "0"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             },
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "type": "block",
Feb 01 15:09:15 compute-0 jolly_turing[241047]:             "vg_name": "ceph_vg2"
Feb 01 15:09:15 compute-0 jolly_turing[241047]:         }
Feb 01 15:09:15 compute-0 jolly_turing[241047]:     ]
Feb 01 15:09:15 compute-0 jolly_turing[241047]: }
Feb 01 15:09:15 compute-0 systemd[1]: libpod-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope: Deactivated successfully.
Feb 01 15:09:15 compute-0 podman[241031]: 2026-02-01 15:09:15.093958609 +0000 UTC m=+0.387151087 container died 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25c978d789b2ffb08c522b74b1c7f35e52c7d7f1417f62d6b0381172338138e-merged.mount: Deactivated successfully.
Feb 01 15:09:15 compute-0 podman[241031]: 2026-02-01 15:09:15.133570249 +0000 UTC m=+0.426762727 container remove 5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 15:09:15 compute-0 systemd[1]: libpod-conmon-5385080fd0f5a638d00f02fbd51b90987aeb1f447c3eff08a22b900ca58cf405.scope: Deactivated successfully.
Feb 01 15:09:15 compute-0 sudo[240951]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:15 compute-0 sudo[241068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:09:15 compute-0 sudo[241068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:15 compute-0 sudo[241068]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:15 compute-0 sudo[241093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:09:15 compute-0 sudo[241093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:15 compute-0 ceph-mon[75179]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.492159255 +0000 UTC m=+0.034655362 container create 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:09:15 compute-0 systemd[1]: Started libpod-conmon-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope.
Feb 01 15:09:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.551798926 +0000 UTC m=+0.094295053 container init 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.558043111 +0000 UTC m=+0.100539218 container start 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.560597062 +0000 UTC m=+0.103093169 container attach 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:09:15 compute-0 vigorous_einstein[241146]: 167 167
Feb 01 15:09:15 compute-0 systemd[1]: libpod-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope: Deactivated successfully.
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.562119015 +0000 UTC m=+0.104615112 container died 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.478845782 +0000 UTC m=+0.021341899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e11f84e653dd9bc3ba64162aafbd924897622b8f40b31c41f78b9b7cb54e2f2d-merged.mount: Deactivated successfully.
Feb 01 15:09:15 compute-0 podman[241130]: 2026-02-01 15:09:15.595332895 +0000 UTC m=+0.137829002 container remove 870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_einstein, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 15:09:15 compute-0 systemd[1]: libpod-conmon-870d5d4505a72984db8d978e286723d4a16d623067eecb85315435eff461e454.scope: Deactivated successfully.
Feb 01 15:09:15 compute-0 podman[241171]: 2026-02-01 15:09:15.740495272 +0000 UTC m=+0.045949268 container create d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:09:15 compute-0 systemd[1]: Started libpod-conmon-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope.
Feb 01 15:09:15 compute-0 podman[241171]: 2026-02-01 15:09:15.716245743 +0000 UTC m=+0.021699729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:09:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:09:15 compute-0 podman[241171]: 2026-02-01 15:09:15.832597853 +0000 UTC m=+0.138051899 container init d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 15:09:15 compute-0 podman[241171]: 2026-02-01 15:09:15.839901807 +0000 UTC m=+0.145355763 container start d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:09:15 compute-0 podman[241171]: 2026-02-01 15:09:15.842991854 +0000 UTC m=+0.148445900 container attach d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:09:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:16 compute-0 lvm[241295]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:09:16 compute-0 lvm[241296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:09:16 compute-0 lvm[241295]: VG ceph_vg1 finished
Feb 01 15:09:16 compute-0 lvm[241296]: VG ceph_vg2 finished
Feb 01 15:09:16 compute-0 lvm[241292]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:09:16 compute-0 lvm[241292]: VG ceph_vg0 finished
Feb 01 15:09:16 compute-0 podman[241262]: 2026-02-01 15:09:16.506110451 +0000 UTC m=+0.081825233 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:09:16 compute-0 distracted_ishizaka[241187]: {}
Feb 01 15:09:16 compute-0 podman[241263]: 2026-02-01 15:09:16.556931765 +0000 UTC m=+0.122193914 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Feb 01 15:09:16 compute-0 systemd[1]: libpod-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Deactivated successfully.
Feb 01 15:09:16 compute-0 systemd[1]: libpod-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Consumed 1.084s CPU time.
Feb 01 15:09:16 compute-0 podman[241171]: 2026-02-01 15:09:16.590573338 +0000 UTC m=+0.896027304 container died d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 15:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1da56caaf3deb125b751ac562ac0c72270af11a04ada87c00a2a7230b9263e9-merged.mount: Deactivated successfully.
Feb 01 15:09:16 compute-0 podman[241171]: 2026-02-01 15:09:16.633645324 +0000 UTC m=+0.939099290 container remove d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ishizaka, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:09:16 compute-0 systemd[1]: libpod-conmon-d0d0d2617be8bb886854618769ef5b3db5a09014a4cd5e453558953c2c315c76.scope: Deactivated successfully.
Feb 01 15:09:16 compute-0 sudo[241093]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:09:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:09:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:16 compute-0 sudo[241327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:09:16 compute-0 sudo[241327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:09:16 compute-0 sudo[241327]: pam_unix(sudo:session): session closed for user root
Feb 01 15:09:17 compute-0 ceph-mon[75179]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:09:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:09:17
Feb 01 15:09:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:09:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:09:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control']
Feb 01 15:09:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:19 compute-0 ceph-mon[75179]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.043665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561043689, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1344, "num_deletes": 505, "total_data_size": 1631077, "memory_usage": 1659664, "flush_reason": "Manual Compaction"}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561050335, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1604644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13615, "largest_seqno": 14958, "table_properties": {"data_size": 1598706, "index_size": 2758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14990, "raw_average_key_size": 18, "raw_value_size": 1584871, "raw_average_value_size": 1911, "num_data_blocks": 126, "num_entries": 829, "num_filter_entries": 829, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958457, "oldest_key_time": 1769958457, "file_creation_time": 1769958561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 6708 microseconds, and 2921 cpu microseconds.
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.050372) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1604644 bytes OK
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.050389) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051444) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051457) EVENT_LOG_v1 {"time_micros": 1769958561051453, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051472) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1623972, prev total WAL file size 1623972, number of live WAL files 2.
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1567KB)], [32(7659KB)]
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561051893, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9447580, "oldest_snapshot_seqno": -1}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3850 keys, 7479501 bytes, temperature: kUnknown
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561089481, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7479501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7451875, "index_size": 16892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94165, "raw_average_key_size": 24, "raw_value_size": 7380321, "raw_average_value_size": 1916, "num_data_blocks": 717, "num_entries": 3850, "num_filter_entries": 3850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.089797) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7479501 bytes
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.090861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 250.6 rd, 198.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.5 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 4873, records dropped: 1023 output_compression: NoCompression
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.090890) EVENT_LOG_v1 {"time_micros": 1769958561090875, "job": 14, "event": "compaction_finished", "compaction_time_micros": 37695, "compaction_time_cpu_micros": 11688, "output_level": 6, "num_output_files": 1, "total_output_size": 7479501, "num_input_records": 4873, "num_output_records": 3850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561091272, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958561092480, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.051802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:09:21.092568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:09:21 compute-0 ceph-mon[75179]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:23 compute-0 ceph-mon[75179]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:25 compute-0 ceph-mon[75179]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:27 compute-0 ceph-mon[75179]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:09:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:09:29 compute-0 ceph-mon[75179]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:32 compute-0 ceph-mon[75179]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:33 compute-0 ceph-mon[75179]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:35 compute-0 ceph-mon[75179]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:37 compute-0 ceph-mon[75179]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:39 compute-0 ceph-mon[75179]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:41 compute-0 ceph-mon[75179]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:41 compute-0 nova_compute[238794]: 2026-02-01 15:09:41.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:41 compute-0 nova_compute[238794]: 2026-02-01 15:09:41.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:09:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.316 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.540 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.542 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:42 compute-0 nova_compute[238794]: 2026-02-01 15:09:42.542 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.558 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.559 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:09:43 compute-0 nova_compute[238794]: 2026-02-01 15:09:43.560 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:09:43 compute-0 ceph-mon[75179]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:09:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831456260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.118 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:09:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.247 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.248 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:09:44 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2831456260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.958 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.958 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:09:44 compute-0 nova_compute[238794]: 2026-02-01 15:09:44.979 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:09:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:09:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647746395' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:09:45 compute-0 nova_compute[238794]: 2026-02-01 15:09:45.530 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:09:45 compute-0 nova_compute[238794]: 2026-02-01 15:09:45.534 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:09:45 compute-0 nova_compute[238794]: 2026-02-01 15:09:45.584 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:09:45 compute-0 nova_compute[238794]: 2026-02-01 15:09:45.586 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:09:45 compute-0 nova_compute[238794]: 2026-02-01 15:09:45.586 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:09:45 compute-0 ceph-mon[75179]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:45 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2647746395' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:09:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:46 compute-0 nova_compute[238794]: 2026-02-01 15:09:46.586 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:09:46 compute-0 podman[241396]: 2026-02-01 15:09:46.974789154 +0000 UTC m=+0.057717058 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:09:46 compute-0 podman[241397]: 2026-02-01 15:09:46.993988831 +0000 UTC m=+0.077419729 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 01 15:09:47 compute-0 ceph-mon[75179]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:09:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:09:49 compute-0 ceph-mon[75179]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:09:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:09:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:09:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:09:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:51 compute-0 ceph-mon[75179]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:09:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3327240749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:09:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:53 compute-0 ceph-mon[75179]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:55 compute-0 ceph-mon[75179]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:09:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:57 compute-0 ceph-mon[75179]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:09:59 compute-0 ceph-mon[75179]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:10:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3385 writes, 15K keys, 3385 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3385 writes, 3385 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5858 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    172.0      0.09              0.04         7    0.013       0      0       0.0       0.0
                                             L6      1/0    7.13 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    248.9    205.1      0.21              0.09         6    0.034     24K   3194       0.0       0.0
                                            Sum      1/0    7.13 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    171.0    194.8      0.30              0.13        13    0.023     24K   3194       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    198.5    200.5      0.18              0.07         8    0.022     17K   2464       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    248.9    205.1      0.21              0.09         6    0.034     24K   3194       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    181.5      0.09              0.04         6    0.015       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 308.00 MB usage: 1.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.61 MB,0.522832%) FilterBlock(14,74.98 KB,0.023775%) IndexBlock(14,153.55 KB,0.0486845%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 01 15:10:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:01 compute-0 ceph-mon[75179]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:03 compute-0 ceph-mon[75179]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:05 compute-0 ceph-mon[75179]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:07 compute-0 ceph-mon[75179]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:10:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:10:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:10:07.805 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:10:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:09 compute-0 ceph-mon[75179]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:11 compute-0 ceph-mon[75179]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:13 compute-0 ceph-mon[75179]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:15 compute-0 ceph-mon[75179]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:16 compute-0 sudo[241442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:10:16 compute-0 sudo[241442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:16 compute-0 sudo[241442]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:16 compute-0 sudo[241467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:10:16 compute-0 sudo[241467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:17 compute-0 sudo[241467]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:10:17 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:10:17 compute-0 sudo[241523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:10:17 compute-0 sudo[241523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:17 compute-0 sudo[241523]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:17 compute-0 sudo[241560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:10:17 compute-0 sudo[241560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:17 compute-0 podman[241547]: 2026-02-01 15:10:17.689803658 +0000 UTC m=+0.113671886 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:10:17 compute-0 podman[241548]: 2026-02-01 15:10:17.709273343 +0000 UTC m=+0.132153453 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 15:10:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:10:17
Feb 01 15:10:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:10:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:10:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Feb 01 15:10:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:10:17 compute-0 ceph-mon[75179]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:10:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.888922096 +0000 UTC m=+0.046562135 container create 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:10:17 compute-0 systemd[1]: Started libpod-conmon-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope.
Feb 01 15:10:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.863466673 +0000 UTC m=+0.021106812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.962983721 +0000 UTC m=+0.120623850 container init 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.971889411 +0000 UTC m=+0.129529460 container start 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.975958815 +0000 UTC m=+0.133598954 container attach 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:10:17 compute-0 systemd[1]: libpod-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope: Deactivated successfully.
Feb 01 15:10:17 compute-0 blissful_golick[241645]: 167 167
Feb 01 15:10:17 compute-0 podman[241628]: 2026-02-01 15:10:17.978381862 +0000 UTC m=+0.136021931 container died 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:10:17 compute-0 conmon[241645]: conmon 7060d8f9c453f66fa248 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope/container/memory.events
Feb 01 15:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e09286c57d7b73ad0b2d887f4565c958e61e46d31ddc8467c1f9923fbd00b5-merged.mount: Deactivated successfully.
Feb 01 15:10:18 compute-0 podman[241628]: 2026-02-01 15:10:18.024945337 +0000 UTC m=+0.182585416 container remove 7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_golick, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:10:18 compute-0 systemd[1]: libpod-conmon-7060d8f9c453f66fa24801e3814de1efe91b1c677ce291cae272b980f047391b.scope: Deactivated successfully.
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.205957428 +0000 UTC m=+0.054494748 container create f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 15:10:18 compute-0 systemd[1]: Started libpod-conmon-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope.
Feb 01 15:10:18 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.179182688 +0000 UTC m=+0.027720098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.300460046 +0000 UTC m=+0.148997386 container init f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.308901102 +0000 UTC m=+0.157438452 container start f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.313593174 +0000 UTC m=+0.162130504 container attach f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:10:18 compute-0 peaceful_mcnulty[241683]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:10:18 compute-0 peaceful_mcnulty[241683]: --> All data devices are unavailable
Feb 01 15:10:18 compute-0 systemd[1]: libpod-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope: Deactivated successfully.
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.755145084 +0000 UTC m=+0.603682404 container died f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e832c77ac4acfa2a3fcedc7f7466f76ba05d50af62ee4c577cec87b0f2406b68-merged.mount: Deactivated successfully.
Feb 01 15:10:18 compute-0 podman[241667]: 2026-02-01 15:10:18.793353044 +0000 UTC m=+0.641890354 container remove f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:10:18 compute-0 systemd[1]: libpod-conmon-f2f0f8d6b68c27848c40a582de2ead9931c26e2dc719acea7db8a029904c0977.scope: Deactivated successfully.
Feb 01 15:10:18 compute-0 sudo[241560]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:18 compute-0 sudo[241716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:10:18 compute-0 sudo[241716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:18 compute-0 sudo[241716]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:18 compute-0 sudo[241741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:10:18 compute-0 sudo[241741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.220433368 +0000 UTC m=+0.029846487 container create b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:10:19 compute-0 systemd[1]: Started libpod-conmon-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope.
Feb 01 15:10:19 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.282781675 +0000 UTC m=+0.092194804 container init b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.286397786 +0000 UTC m=+0.095810915 container start b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 15:10:19 compute-0 systemd[1]: libpod-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope: Deactivated successfully.
Feb 01 15:10:19 compute-0 blissful_kapitsa[241795]: 167 167
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.290159642 +0000 UTC m=+0.099572771 container attach b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:10:19 compute-0 conmon[241795]: conmon b45492885904ff6b0e1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope/container/memory.events
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.29045404 +0000 UTC m=+0.099867169 container died b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.206833077 +0000 UTC m=+0.016246236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fa7887fba4f86234be9b0b7530fa74aeec0263c112712398106ea6611231215-merged.mount: Deactivated successfully.
Feb 01 15:10:19 compute-0 podman[241778]: 2026-02-01 15:10:19.322107957 +0000 UTC m=+0.131521076 container remove b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_kapitsa, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:10:19 compute-0 systemd[1]: libpod-conmon-b45492885904ff6b0e1c60c303b95bde0acf129c0533393f9e6383b833e525e8.scope: Deactivated successfully.
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.433502188 +0000 UTC m=+0.034157538 container create a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:10:19 compute-0 systemd[1]: Started libpod-conmon-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope.
Feb 01 15:10:19 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.51390957 +0000 UTC m=+0.114565010 container init a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.418783825 +0000 UTC m=+0.019439205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.519590519 +0000 UTC m=+0.120245889 container start a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.522941833 +0000 UTC m=+0.123597183 container attach a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 15:10:19 compute-0 focused_khayyam[241836]: {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     "0": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "devices": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "/dev/loop3"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             ],
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_name": "ceph_lv0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_size": "21470642176",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "name": "ceph_lv0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "tags": {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_name": "ceph",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.crush_device_class": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.encrypted": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.objectstore": "bluestore",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_id": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.vdo": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.with_tpm": "0"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             },
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "vg_name": "ceph_vg0"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         }
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     ],
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     "1": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "devices": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "/dev/loop4"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             ],
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_name": "ceph_lv1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_size": "21470642176",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "name": "ceph_lv1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "tags": {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_name": "ceph",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.crush_device_class": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.encrypted": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.objectstore": "bluestore",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_id": "1",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.vdo": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.with_tpm": "0"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             },
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "vg_name": "ceph_vg1"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         }
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     ],
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     "2": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "devices": [
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "/dev/loop5"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             ],
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_name": "ceph_lv2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_size": "21470642176",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "name": "ceph_lv2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "tags": {
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.cluster_name": "ceph",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.crush_device_class": "",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.encrypted": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.objectstore": "bluestore",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osd_id": "2",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.vdo": "0",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:                 "ceph.with_tpm": "0"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             },
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "type": "block",
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:             "vg_name": "ceph_vg2"
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:         }
Feb 01 15:10:19 compute-0 focused_khayyam[241836]:     ]
Feb 01 15:10:19 compute-0 focused_khayyam[241836]: }
Feb 01 15:10:19 compute-0 systemd[1]: libpod-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope: Deactivated successfully.
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.760837598 +0000 UTC m=+0.361492938 container died a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 15:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-682e0d0174d941d4fec23e6d20bd638e37bac1b2a6899668aad7cdc984e86e7f-merged.mount: Deactivated successfully.
Feb 01 15:10:19 compute-0 podman[241819]: 2026-02-01 15:10:19.799580124 +0000 UTC m=+0.400235474 container remove a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:10:19 compute-0 ceph-mon[75179]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:19 compute-0 systemd[1]: libpod-conmon-a37dbe07823a76b5aa112364e3f8350d439c162fa2bcd850f5fe309f77f263af.scope: Deactivated successfully.
Feb 01 15:10:19 compute-0 sudo[241741]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:19 compute-0 sudo[241857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:10:19 compute-0 sudo[241857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:19 compute-0 sudo[241857]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:19 compute-0 sudo[241882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:10:19 compute-0 sudo[241882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.206233956 +0000 UTC m=+0.049325943 container create 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:10:20 compute-0 systemd[1]: Started libpod-conmon-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope.
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.181715799 +0000 UTC m=+0.024807806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.294123649 +0000 UTC m=+0.137215646 container init 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.302330388 +0000 UTC m=+0.145422355 container start 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:10:20 compute-0 practical_yonath[241935]: 167 167
Feb 01 15:10:20 compute-0 systemd[1]: libpod-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope: Deactivated successfully.
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.307521414 +0000 UTC m=+0.150613491 container attach 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.30846427 +0000 UTC m=+0.151556267 container died 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 15:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2500c1f01c87256a3615bdbab2673e74dbd901f51ab80a229c94e6c019ccd281-merged.mount: Deactivated successfully.
Feb 01 15:10:20 compute-0 podman[241919]: 2026-02-01 15:10:20.402427653 +0000 UTC m=+0.245519620 container remove 1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 01 15:10:20 compute-0 systemd[1]: libpod-conmon-1e3dbd17ae16f67f91fd1a7b10d6b31c817e6cae61a8737cb9c880a678b1bd41.scope: Deactivated successfully.
Feb 01 15:10:20 compute-0 podman[241959]: 2026-02-01 15:10:20.563219898 +0000 UTC m=+0.052182083 container create 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:10:20 compute-0 systemd[1]: Started libpod-conmon-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope.
Feb 01 15:10:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:20 compute-0 podman[241959]: 2026-02-01 15:10:20.546264113 +0000 UTC m=+0.035226298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:10:20 compute-0 podman[241959]: 2026-02-01 15:10:20.663738264 +0000 UTC m=+0.152700479 container init 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:10:20 compute-0 podman[241959]: 2026-02-01 15:10:20.675147673 +0000 UTC m=+0.164109858 container start 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:10:20 compute-0 podman[241959]: 2026-02-01 15:10:20.678418435 +0000 UTC m=+0.167380860 container attach 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 15:10:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:21 compute-0 lvm[242054]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:10:21 compute-0 lvm[242053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:10:21 compute-0 lvm[242053]: VG ceph_vg0 finished
Feb 01 15:10:21 compute-0 lvm[242054]: VG ceph_vg1 finished
Feb 01 15:10:21 compute-0 lvm[242056]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:10:21 compute-0 lvm[242056]: VG ceph_vg2 finished
Feb 01 15:10:21 compute-0 zen_carson[241975]: {}
Feb 01 15:10:21 compute-0 systemd[1]: libpod-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Deactivated successfully.
Feb 01 15:10:21 compute-0 systemd[1]: libpod-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Consumed 1.159s CPU time.
Feb 01 15:10:21 compute-0 podman[241959]: 2026-02-01 15:10:21.492126011 +0000 UTC m=+0.981088196 container died 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f9455396a6683e206ddbe717e1d7d07c08a165d505e7bc8163c26e3d9af9d9a-merged.mount: Deactivated successfully.
Feb 01 15:10:21 compute-0 podman[241959]: 2026-02-01 15:10:21.534983282 +0000 UTC m=+1.023945467 container remove 9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 15:10:21 compute-0 systemd[1]: libpod-conmon-9a6c9e6f1014e99040b0032048a93fbb14ea640e2a4d88c38b0be7aec14fc5f8.scope: Deactivated successfully.
Feb 01 15:10:21 compute-0 sudo[241882]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:10:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:10:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:21 compute-0 sudo[242071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:10:21 compute-0 sudo[242071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:10:21 compute-0 sudo[242071]: pam_unix(sudo:session): session closed for user root
Feb 01 15:10:21 compute-0 ceph-mon[75179]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:21 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:10:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:23 compute-0 ceph-mon[75179]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:25 compute-0 ceph-mon[75179]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:27 compute-0 ceph-mon[75179]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:10:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:29 compute-0 ceph-mon[75179]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:31 compute-0 ceph-mon[75179]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:33 compute-0 ceph-mon[75179]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:35 compute-0 ceph-mon[75179]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:37 compute-0 ceph-mon[75179]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:39 compute-0 ceph-mon[75179]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:41 compute-0 ceph-mon[75179]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:42 compute-0 nova_compute[238794]: 2026-02-01 15:10:42.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:42 compute-0 nova_compute[238794]: 2026-02-01 15:10:42.316 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:43 compute-0 ceph-mon[75179]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.469 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.469 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.470 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.689 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.690 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.690 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:43 compute-0 nova_compute[238794]: 2026-02-01 15:10:43.691 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:10:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.343 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.344 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:10:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:10:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072529224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:10:44 compute-0 nova_compute[238794]: 2026-02-01 15:10:44.919 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.048 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.049 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5120MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.050 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.050 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.119 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.120 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.138 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:10:45 compute-0 ceph-mon[75179]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:45 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4072529224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:10:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:10:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491844404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.636 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.643 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.663 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.666 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:10:45 compute-0 nova_compute[238794]: 2026-02-01 15:10:45.667 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:10:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3491844404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:10:46 compute-0 nova_compute[238794]: 2026-02-01 15:10:46.668 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:46 compute-0 nova_compute[238794]: 2026-02-01 15:10:46.668 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:10:47 compute-0 ceph-mon[75179]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:47 compute-0 podman[242140]: 2026-02-01 15:10:47.991663247 +0000 UTC m=+0.076685540 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 01 15:10:48 compute-0 podman[242141]: 2026-02-01 15:10:48.025153195 +0000 UTC m=+0.105029563 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:10:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:10:49 compute-0 ceph-mon[75179]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:10:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:10:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:10:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:10:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:51 compute-0 ceph-mon[75179]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:10:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3244925717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:10:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:53 compute-0 ceph-mon[75179]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:55 compute-0 ceph-mon[75179]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:10:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:57 compute-0 ceph-mon[75179]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:10:59 compute-0 ceph-mon[75179]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:00 compute-0 sshd-session[242183]: Invalid user sol from 80.94.92.171 port 55800
Feb 01 15:11:00 compute-0 sshd-session[242183]: Connection closed by invalid user sol 80.94.92.171 port 55800 [preauth]
Feb 01 15:11:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:11:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5863 writes, 1012 syncs, 5.79 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:11:01 compute-0 ceph-mon[75179]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:03 compute-0 ceph-mon[75179]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:11:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7147 writes, 29K keys, 7147 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7147 writes, 1430 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:11:05 compute-0 ceph-mon[75179]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:07 compute-0 ceph-mon[75179]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.806 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:11:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:11:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:11:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:11:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:11:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5731 writes, 24K keys, 5731 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5731 writes, 924 syncs, 6.20 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:11:09 compute-0 ceph-mon[75179]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:11 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb 01 15:11:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:11 compute-0 ceph-mon[75179]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:13 compute-0 ceph-mon[75179]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:15 compute-0 ceph-mon[75179]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:11:17
Feb 01 15:11:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:11:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:11:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', '.mgr']
Feb 01 15:11:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:11:17 compute-0 ceph-mon[75179]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:19 compute-0 podman[242185]: 2026-02-01 15:11:19.014087245 +0000 UTC m=+0.092317662 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 01 15:11:19 compute-0 podman[242186]: 2026-02-01 15:11:19.045922799 +0000 UTC m=+0.126091161 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 01 15:11:19 compute-0 ceph-mon[75179]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:21 compute-0 sudo[242232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:11:21 compute-0 sudo[242232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:21 compute-0 sudo[242232]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:21 compute-0 sudo[242257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 01 15:11:21 compute-0 sudo[242257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:21 compute-0 ceph-mon[75179]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:22 compute-0 sudo[242257]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:22 compute-0 sudo[242302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:11:22 compute-0 sudo[242302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:22 compute-0 sudo[242302]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:22 compute-0 sudo[242327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:11:22 compute-0 sudo[242327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:22 compute-0 sudo[242327]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:11:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:11:22 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:11:22 compute-0 sudo[242383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:11:22 compute-0 sudo[242383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:22 compute-0 sudo[242383]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:22 compute-0 sudo[242408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:11:22 compute-0 sudo[242408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:22 compute-0 podman[242445]: 2026-02-01 15:11:22.962911588 +0000 UTC m=+0.050821668 container create d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 15:11:23 compute-0 systemd[1]: Started libpod-conmon-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope.
Feb 01 15:11:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:22.935034115 +0000 UTC m=+0.022944265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:23.034473266 +0000 UTC m=+0.122383386 container init d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:23.039704973 +0000 UTC m=+0.127615063 container start d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 15:11:23 compute-0 epic_haslett[242461]: 167 167
Feb 01 15:11:23 compute-0 systemd[1]: libpod-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope: Deactivated successfully.
Feb 01 15:11:23 compute-0 conmon[242461]: conmon d6d77084c45039005eff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope/container/memory.events
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:23.045484896 +0000 UTC m=+0.133394946 container attach d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:23.046531865 +0000 UTC m=+0.134441915 container died d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fefb93f468b8d9fbb66eef2b55b7c017071d3e1eaa15fa28fbd23f394822f2e0-merged.mount: Deactivated successfully.
Feb 01 15:11:23 compute-0 podman[242445]: 2026-02-01 15:11:23.091574619 +0000 UTC m=+0.179484699 container remove d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:11:23 compute-0 systemd[1]: libpod-conmon-d6d77084c45039005eff4129920845197e969f9b15144f47cb0bca6f8de3e660.scope: Deactivated successfully.
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:23 compute-0 ceph-mon[75179]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:11:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.227817914 +0000 UTC m=+0.055487149 container create 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:11:23 compute-0 systemd[1]: Started libpod-conmon-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope.
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.205121977 +0000 UTC m=+0.032791262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.343535272 +0000 UTC m=+0.171204527 container init 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.361342742 +0000 UTC m=+0.189011977 container start 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.367211246 +0000 UTC m=+0.194880491 container attach 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 15:11:23 compute-0 nice_villani[242502]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:11:23 compute-0 nice_villani[242502]: --> All data devices are unavailable
Feb 01 15:11:23 compute-0 systemd[1]: libpod-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope: Deactivated successfully.
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.878106306 +0000 UTC m=+0.705775581 container died 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c21c373f9048fd0f16ac8496ed2e5bc0dd77932e70682966a2763f0933248898-merged.mount: Deactivated successfully.
Feb 01 15:11:23 compute-0 podman[242486]: 2026-02-01 15:11:23.931457194 +0000 UTC m=+0.759126429 container remove 22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:11:23 compute-0 systemd[1]: libpod-conmon-22edb8115ce87b24f0fc10b809d1980657a1e3305eb9665f897fd4e701088b8c.scope: Deactivated successfully.
Feb 01 15:11:23 compute-0 sudo[242408]: pam_unix(sudo:session): session closed for user root
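The `--> passed data devices: 0 physical, 3 LVM` / `--> All data devices are unavailable` lines above appear to come from a `ceph-volume lvm batch` dry run that found the three LVM-backed devices already consumed. When diagnosing such a rejection, `ceph-volume inventory` reports per-device availability and rejection reasons. A sketch, reusing the fsid and image digest from the log above (environment-dependent; adjust for your host, so no expected output is shown):

```shell
# Ask ceph-volume (via cephadm) why devices are considered unavailable.
# --fsid and --image are taken from the cephadm invocations logged above.
sudo cephadm \
  --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 \
  ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- \
  inventory --format json
```

Devices whose `available` field is false list their `rejected_reasons`; here the loop-backed LVs are expected to be rejected because they already carry the `ceph.osd_id` tags shown in the `lvm list` output below in the log.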
Feb 01 15:11:24 compute-0 sudo[242534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:11:24 compute-0 sudo[242534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:24 compute-0 sudo[242534]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:24 compute-0 sudo[242559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:11:24 compute-0 sudo[242559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.377929097 +0000 UTC m=+0.090811991 container create 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.306978145 +0000 UTC m=+0.019861059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:24 compute-0 systemd[1]: Started libpod-conmon-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope.
Feb 01 15:11:24 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.515053036 +0000 UTC m=+0.227935950 container init 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.520505199 +0000 UTC m=+0.233388093 container start 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 15:11:24 compute-0 gallant_lovelace[242612]: 167 167
Feb 01 15:11:24 compute-0 systemd[1]: libpod-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope: Deactivated successfully.
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.549362239 +0000 UTC m=+0.262245103 container attach 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.549700128 +0000 UTC m=+0.262582992 container died 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fed254ad37dadc72cf335bf02ee3b15035bed87fa7bdb5fe75e648eef2a23e-merged.mount: Deactivated successfully.
Feb 01 15:11:24 compute-0 podman[242596]: 2026-02-01 15:11:24.594938668 +0000 UTC m=+0.307821552 container remove 8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:11:24 compute-0 systemd[1]: libpod-conmon-8ff584586133c3c036e521cca9c76d86f2aab54bf38e75383a3cee4a321e2e15.scope: Deactivated successfully.
Feb 01 15:11:24 compute-0 podman[242638]: 2026-02-01 15:11:24.749357793 +0000 UTC m=+0.061097666 container create 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 15:11:24 compute-0 systemd[1]: Started libpod-conmon-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope.
Feb 01 15:11:24 compute-0 podman[242638]: 2026-02-01 15:11:24.724349751 +0000 UTC m=+0.036089674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:24 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:24 compute-0 podman[242638]: 2026-02-01 15:11:24.885440923 +0000 UTC m=+0.197180876 container init 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 15:11:24 compute-0 podman[242638]: 2026-02-01 15:11:24.89533395 +0000 UTC m=+0.207073823 container start 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 01 15:11:24 compute-0 podman[242638]: 2026-02-01 15:11:24.898827879 +0000 UTC m=+0.210567752 container attach 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:11:25 compute-0 brave_noether[242654]: {
Feb 01 15:11:25 compute-0 brave_noether[242654]:     "0": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:         {
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "devices": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "/dev/loop3"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             ],
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_name": "ceph_lv0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_size": "21470642176",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "name": "ceph_lv0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "tags": {
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_name": "ceph",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.crush_device_class": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.encrypted": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.objectstore": "bluestore",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_id": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.vdo": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.with_tpm": "0"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             },
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "vg_name": "ceph_vg0"
Feb 01 15:11:25 compute-0 brave_noether[242654]:         }
Feb 01 15:11:25 compute-0 brave_noether[242654]:     ],
Feb 01 15:11:25 compute-0 brave_noether[242654]:     "1": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:         {
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "devices": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "/dev/loop4"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             ],
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_name": "ceph_lv1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_size": "21470642176",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "name": "ceph_lv1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "tags": {
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_name": "ceph",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.crush_device_class": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.encrypted": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.objectstore": "bluestore",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_id": "1",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.vdo": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.with_tpm": "0"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             },
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "vg_name": "ceph_vg1"
Feb 01 15:11:25 compute-0 brave_noether[242654]:         }
Feb 01 15:11:25 compute-0 brave_noether[242654]:     ],
Feb 01 15:11:25 compute-0 brave_noether[242654]:     "2": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:         {
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "devices": [
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "/dev/loop5"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             ],
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_name": "ceph_lv2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_size": "21470642176",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "name": "ceph_lv2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "tags": {
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.cluster_name": "ceph",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.crush_device_class": "",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.encrypted": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.objectstore": "bluestore",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osd_id": "2",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.vdo": "0",
Feb 01 15:11:25 compute-0 brave_noether[242654]:                 "ceph.with_tpm": "0"
Feb 01 15:11:25 compute-0 brave_noether[242654]:             },
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "type": "block",
Feb 01 15:11:25 compute-0 brave_noether[242654]:             "vg_name": "ceph_vg2"
Feb 01 15:11:25 compute-0 brave_noether[242654]:         }
Feb 01 15:11:25 compute-0 brave_noether[242654]:     ]
Feb 01 15:11:25 compute-0 brave_noether[242654]: }
Feb 01 15:11:25 compute-0 systemd[1]: libpod-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope: Deactivated successfully.
Feb 01 15:11:25 compute-0 podman[242638]: 2026-02-01 15:11:25.177370537 +0000 UTC m=+0.489110440 container died 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4509008f6ae4e1a135228a4d048847582e15bb69daf54daaf41d4e6d56ea0a4-merged.mount: Deactivated successfully.
Feb 01 15:11:25 compute-0 podman[242638]: 2026-02-01 15:11:25.225439567 +0000 UTC m=+0.537179450 container remove 079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:11:25 compute-0 systemd[1]: libpod-conmon-079cca1be8be15f0f497221d86d997040a4d0473de3b958a45b9876d6a0f2571.scope: Deactivated successfully.
Feb 01 15:11:25 compute-0 sudo[242559]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:25 compute-0 ceph-mon[75179]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:25 compute-0 sudo[242676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:11:25 compute-0 sudo[242676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:25 compute-0 sudo[242676]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:25 compute-0 sudo[242701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:11:25 compute-0 sudo[242701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.675792748 +0000 UTC m=+0.041622269 container create a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:11:25 compute-0 systemd[1]: Started libpod-conmon-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope.
Feb 01 15:11:25 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.659507181 +0000 UTC m=+0.025336732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.774660543 +0000 UTC m=+0.140490054 container init a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.782142603 +0000 UTC m=+0.147972134 container start a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:11:25 compute-0 elastic_carver[242757]: 167 167
Feb 01 15:11:25 compute-0 systemd[1]: libpod-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope: Deactivated successfully.
Feb 01 15:11:25 compute-0 conmon[242757]: conmon a0c3cb91ed4ac361203e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope/container/memory.events
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.805589772 +0000 UTC m=+0.171419293 container attach a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.806090186 +0000 UTC m=+0.171919707 container died a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 15:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdb941e0555440a025aa4b9d2ddc26f13f0a0f015da126514d7d958d21166311-merged.mount: Deactivated successfully.
Feb 01 15:11:25 compute-0 podman[242740]: 2026-02-01 15:11:25.910766924 +0000 UTC m=+0.276596445 container remove a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_carver, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:11:25 compute-0 systemd[1]: libpod-conmon-a0c3cb91ed4ac361203e0918d933ecfec4a64f9fdfb313f7fcac3d4f092f16c7.scope: Deactivated successfully.
Feb 01 15:11:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.07379305 +0000 UTC m=+0.052361471 container create 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb 01 15:11:26 compute-0 systemd[1]: Started libpod-conmon-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope.
Feb 01 15:11:26 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.044965011 +0000 UTC m=+0.023533492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.157119179 +0000 UTC m=+0.135687640 container init 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.168853928 +0000 UTC m=+0.147422329 container start 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.172443919 +0000 UTC m=+0.151012390 container attach 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:11:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:26 compute-0 lvm[242879]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:11:26 compute-0 lvm[242877]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:11:26 compute-0 lvm[242877]: VG ceph_vg0 finished
Feb 01 15:11:26 compute-0 lvm[242879]: VG ceph_vg1 finished
Feb 01 15:11:26 compute-0 lvm[242881]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:11:26 compute-0 lvm[242881]: VG ceph_vg2 finished
Feb 01 15:11:26 compute-0 practical_clarke[242799]: {}
Feb 01 15:11:26 compute-0 systemd[1]: libpod-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope: Deactivated successfully.
Feb 01 15:11:26 compute-0 podman[242783]: 2026-02-01 15:11:26.909848918 +0000 UTC m=+0.888417309 container died 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 15:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8424d85fb047a35f23da6223a6367d80c27719a965f4108d614416d554a48cd0-merged.mount: Deactivated successfully.
Feb 01 15:11:27 compute-0 ceph-mon[75179]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:27 compute-0 podman[242783]: 2026-02-01 15:11:27.41615558 +0000 UTC m=+1.394723991 container remove 6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:11:27 compute-0 sudo[242701]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:11:27 compute-0 systemd[1]: libpod-conmon-6249f39d785f4cc67a0dee5d94e83bc099f3135b312b384fdb1e936aa1e5bcf7.scope: Deactivated successfully.
Feb 01 15:11:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:11:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:27 compute-0 sudo[242898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:11:27 compute-0 sudo[242898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:11:27 compute-0 sudo[242898]: pam_unix(sudo:session): session closed for user root
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:11:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:11:29 compute-0 ceph-mon[75179]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:32 compute-0 ceph-mon[75179]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:33 compute-0 ceph-mon[75179]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:35 compute-0 ceph-mon[75179]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:37 compute-0 ceph-mon[75179]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:39 compute-0 ceph-mon[75179]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.437 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.439 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.440 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 01 15:11:40 compute-0 nova_compute[238794]: 2026-02-01 15:11:40.634 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:41 compute-0 ceph-mon[75179]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:43 compute-0 ceph-mon[75179]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:43 compute-0 nova_compute[238794]: 2026-02-01 15:11:43.924 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.400 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.401 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:44 compute-0 nova_compute[238794]: 2026-02-01 15:11:44.401 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:11:45 compute-0 nova_compute[238794]: 2026-02-01 15:11:45.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:45 compute-0 nova_compute[238794]: 2026-02-01 15:11:45.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:45 compute-0 ceph-mon[75179]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.352 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.352 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.353 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:11:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:11:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765978889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:11:46 compute-0 nova_compute[238794]: 2026-02-01 15:11:46.851 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:11:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2765978889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.936884) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706936935, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1385, "num_deletes": 251, "total_data_size": 2229066, "memory_usage": 2272792, "flush_reason": "Manual Compaction"}
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706949313, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2186608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14959, "largest_seqno": 16343, "table_properties": {"data_size": 2180092, "index_size": 3715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13372, "raw_average_key_size": 19, "raw_value_size": 2167047, "raw_average_value_size": 3182, "num_data_blocks": 170, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958561, "oldest_key_time": 1769958561, "file_creation_time": 1769958706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12466 microseconds, and 3574 cpu microseconds.
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.949357) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2186608 bytes OK
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.949374) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951115) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951132) EVENT_LOG_v1 {"time_micros": 1769958706951127, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951150) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2222912, prev total WAL file size 2222912, number of live WAL files 2.
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951722) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2135KB)], [35(7304KB)]
Feb 01 15:11:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958706951800, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9666109, "oldest_snapshot_seqno": -1}
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.011 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.012 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4017 keys, 7857216 bytes, temperature: kUnknown
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707020694, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7857216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7828214, "index_size": 17884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98110, "raw_average_key_size": 24, "raw_value_size": 7753428, "raw_average_value_size": 1930, "num_data_blocks": 757, "num_entries": 4017, "num_filter_entries": 4017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.020962) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7857216 bytes
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.022469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.2 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4531, records dropped: 514 output_compression: NoCompression
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.022488) EVENT_LOG_v1 {"time_micros": 1769958707022479, "job": 16, "event": "compaction_finished", "compaction_time_micros": 68961, "compaction_time_cpu_micros": 27948, "output_level": 6, "num_output_files": 1, "total_output_size": 7857216, "num_input_records": 4531, "num_output_records": 4017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707022785, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958707023433, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:46.951621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:11:47.023516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.270 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.270 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.389 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.502 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.502 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.521 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.541 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 01 15:11:47 compute-0 nova_compute[238794]: 2026-02-01 15:11:47.556 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:11:47 compute-0 ceph-mon[75179]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:11:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519629576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:11:48 compute-0 nova_compute[238794]: 2026-02-01 15:11:48.069 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:11:48 compute-0 nova_compute[238794]: 2026-02-01 15:11:48.075 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:11:48 compute-0 nova_compute[238794]: 2026-02-01 15:11:48.093 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:11:48 compute-0 nova_compute[238794]: 2026-02-01 15:11:48.096 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:11:48 compute-0 nova_compute[238794]: 2026-02-01 15:11:48.097 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:48 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3519629576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:11:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:11:49 compute-0 podman[242968]: 2026-02-01 15:11:49.983886605 +0000 UTC m=+0.068646568 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:11:49 compute-0 podman[242967]: 2026-02-01 15:11:49.988857324 +0000 UTC m=+0.073466493 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 01 15:11:49 compute-0 ceph-mon[75179]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:11:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:11:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:11:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:11:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:11:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2270999465' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:11:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:52 compute-0 ceph-mon[75179]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:54 compute-0 ceph-mon[75179]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:56 compute-0 ceph-mon[75179]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:11:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:58 compute-0 ceph-mon[75179]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:11:59 compute-0 ceph-mon[75179]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:01 compute-0 ceph-mon[75179]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:03 compute-0 ceph-mon[75179]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:05 compute-0 ceph-mon[75179]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:07 compute-0 ceph-mon[75179]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.807 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:12:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:12:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:12:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:12:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:09 compute-0 ceph-mon[75179]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:11 compute-0 ceph-mon[75179]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:13 compute-0 ceph-mon[75179]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 01 15:12:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:15 compute-0 ceph-mon[75179]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:17 compute-0 ceph-mon[75179]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:12:17
Feb 01 15:12:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:12:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:12:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control']
Feb 01 15:12:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:19 compute-0 ceph-mon[75179]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:20 compute-0 podman[243009]: 2026-02-01 15:12:20.997148413 +0000 UTC m=+0.076759966 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 01 15:12:21 compute-0 podman[243010]: 2026-02-01 15:12:21.033579485 +0000 UTC m=+0.108592039 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:12:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:21 compute-0 ceph-mon[75179]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:23 compute-0 ceph-mon[75179]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:25 compute-0 ceph-mon[75179]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:27 compute-0 ceph-mon[75179]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:27 compute-0 sudo[243054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:12:27 compute-0 sudo[243054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:27 compute-0 sudo[243054]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:27 compute-0 sudo[243079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:12:27 compute-0 sudo[243079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:28 compute-0 sudo[243079]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:12:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:12:28 compute-0 sudo[243136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:12:28 compute-0 sudo[243136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:28 compute-0 sudo[243136]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 sudo[243161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.1352722611116497e-06 of space, bias 4.0, pg target 0.0025623267133339797 quantized to 16 (current 16)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:12:28 compute-0 sudo[243161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:12:28 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.481395106 +0000 UTC m=+0.031798774 container create c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:12:28 compute-0 systemd[1]: Started libpod-conmon-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope.
Feb 01 15:12:28 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.561193256 +0000 UTC m=+0.111596964 container init c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.466206039 +0000 UTC m=+0.016609737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.565380953 +0000 UTC m=+0.115784611 container start c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.568417798 +0000 UTC m=+0.118821476 container attach c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:12:28 compute-0 systemd[1]: libpod-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope: Deactivated successfully.
Feb 01 15:12:28 compute-0 cool_noyce[243215]: 167 167
Feb 01 15:12:28 compute-0 conmon[243215]: conmon c64bba6fcbd184d2374e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope/container/memory.events
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.571776293 +0000 UTC m=+0.122179961 container died c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 15:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc8d1b06e0f13fa46d62b009f306baed635bafc13d64fb1eefa8487804de3ef-merged.mount: Deactivated successfully.
Feb 01 15:12:28 compute-0 podman[243198]: 2026-02-01 15:12:28.610244682 +0000 UTC m=+0.160648360 container remove c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noyce, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:12:28 compute-0 systemd[1]: libpod-conmon-c64bba6fcbd184d2374e6824e55814e68aa2f41c30a7d1cf4a309fb33b5bc249.scope: Deactivated successfully.
Feb 01 15:12:28 compute-0 podman[243238]: 2026-02-01 15:12:28.769283087 +0000 UTC m=+0.079616526 container create 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:12:28 compute-0 systemd[1]: Started libpod-conmon-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope.
Feb 01 15:12:28 compute-0 podman[243238]: 2026-02-01 15:12:28.712977276 +0000 UTC m=+0.023310725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:28 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:28 compute-0 podman[243238]: 2026-02-01 15:12:28.869349185 +0000 UTC m=+0.179682614 container init 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 01 15:12:28 compute-0 podman[243238]: 2026-02-01 15:12:28.876671831 +0000 UTC m=+0.187005250 container start 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 01 15:12:28 compute-0 podman[243238]: 2026-02-01 15:12:28.880622752 +0000 UTC m=+0.190956171 container attach 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:12:29 compute-0 youthful_mclaren[243254]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:12:29 compute-0 youthful_mclaren[243254]: --> All data devices are unavailable
Feb 01 15:12:29 compute-0 systemd[1]: libpod-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope: Deactivated successfully.
Feb 01 15:12:29 compute-0 podman[243238]: 2026-02-01 15:12:29.276241117 +0000 UTC m=+0.586574516 container died 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-365b64fd99745d8018c123ca44f01eb558cf3952c19fe7f734b3a91f190b8368-merged.mount: Deactivated successfully.
Feb 01 15:12:29 compute-0 podman[243238]: 2026-02-01 15:12:29.317772303 +0000 UTC m=+0.628105732 container remove 30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:12:29 compute-0 systemd[1]: libpod-conmon-30a954e1fd7d37282549912617a29405ee36838abf114ddbfeb4c5ef9a1aee4d.scope: Deactivated successfully.
Feb 01 15:12:29 compute-0 sudo[243161]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:29 compute-0 ceph-mon[75179]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:29 compute-0 sudo[243287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:12:29 compute-0 sudo[243287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:29 compute-0 sudo[243287]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:29 compute-0 sudo[243312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:12:29 compute-0 sudo[243312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.829723213 +0000 UTC m=+0.070331325 container create dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:12:29 compute-0 systemd[1]: Started libpod-conmon-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope.
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.779450802 +0000 UTC m=+0.020058974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:29 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.910007537 +0000 UTC m=+0.150615659 container init dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.919097942 +0000 UTC m=+0.159706054 container start dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:12:29 compute-0 awesome_leakey[243367]: 167 167
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.92435702 +0000 UTC m=+0.164965142 container attach dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 01 15:12:29 compute-0 systemd[1]: libpod-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope: Deactivated successfully.
Feb 01 15:12:29 compute-0 conmon[243367]: conmon dabeec8e2d7f5b84929c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope/container/memory.events
Feb 01 15:12:29 compute-0 podman[243350]: 2026-02-01 15:12:29.925585164 +0000 UTC m=+0.166193286 container died dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 01 15:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c11c545c082386c80a42ba418b2d0546e70709e7acb3136ec469c9f88848a4-merged.mount: Deactivated successfully.
Feb 01 15:12:30 compute-0 podman[243350]: 2026-02-01 15:12:30.01342894 +0000 UTC m=+0.254037032 container remove dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 01 15:12:30 compute-0 systemd[1]: libpod-conmon-dabeec8e2d7f5b84929cb58adb0ddcc3ff150e4b31a1ad1e7c25b994fcb9dd50.scope: Deactivated successfully.
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.174863462 +0000 UTC m=+0.054070819 container create 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:12:30 compute-0 systemd[1]: Started libpod-conmon-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope.
Feb 01 15:12:30 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.14665943 +0000 UTC m=+0.025866837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.259531598 +0000 UTC m=+0.138738985 container init 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.268569822 +0000 UTC m=+0.147777169 container start 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.272488082 +0000 UTC m=+0.151695399 container attach 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]: {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     "0": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "devices": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "/dev/loop3"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             ],
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_name": "ceph_lv0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_size": "21470642176",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "name": "ceph_lv0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "tags": {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_name": "ceph",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.crush_device_class": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.encrypted": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.objectstore": "bluestore",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_id": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.vdo": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.with_tpm": "0"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             },
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "vg_name": "ceph_vg0"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         }
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     ],
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     "1": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "devices": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "/dev/loop4"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             ],
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_name": "ceph_lv1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_size": "21470642176",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "name": "ceph_lv1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "tags": {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_name": "ceph",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.crush_device_class": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.encrypted": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.objectstore": "bluestore",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_id": "1",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.vdo": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.with_tpm": "0"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             },
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "vg_name": "ceph_vg1"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         }
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     ],
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     "2": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "devices": [
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "/dev/loop5"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             ],
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_name": "ceph_lv2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_size": "21470642176",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "name": "ceph_lv2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "tags": {
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.cluster_name": "ceph",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.crush_device_class": "",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.encrypted": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.objectstore": "bluestore",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osd_id": "2",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.vdo": "0",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:                 "ceph.with_tpm": "0"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             },
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "type": "block",
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:             "vg_name": "ceph_vg2"
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:         }
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]:     ]
Feb 01 15:12:30 compute-0 beautiful_almeida[243408]: }
Feb 01 15:12:30 compute-0 systemd[1]: libpod-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope: Deactivated successfully.
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.568286065 +0000 UTC m=+0.447493382 container died 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-567919ff856346042910cc02a6bbfdd7f6848bbfcefb2d261fe39cffce8751bf-merged.mount: Deactivated successfully.
Feb 01 15:12:30 compute-0 podman[243391]: 2026-02-01 15:12:30.606151448 +0000 UTC m=+0.485358765 container remove 62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:12:30 compute-0 systemd[1]: libpod-conmon-62be9a6c22da83313e42c72bbec941716c581441cbbffe3db01d5ce9a8e97084.scope: Deactivated successfully.
Feb 01 15:12:30 compute-0 sudo[243312]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:30 compute-0 sudo[243429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:12:30 compute-0 sudo[243429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:30 compute-0 sudo[243429]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:30 compute-0 sudo[243454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:12:30 compute-0 sudo[243454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.028941276 +0000 UTC m=+0.037812833 container create f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 15:12:31 compute-0 systemd[1]: Started libpod-conmon-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope.
Feb 01 15:12:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.013360048 +0000 UTC m=+0.022231615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.113092578 +0000 UTC m=+0.121964135 container init f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.117396079 +0000 UTC m=+0.126267606 container start f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.120260169 +0000 UTC m=+0.129131746 container attach f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:12:31 compute-0 exciting_mcclintock[243506]: 167 167
Feb 01 15:12:31 compute-0 systemd[1]: libpod-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope: Deactivated successfully.
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.121060701 +0000 UTC m=+0.129932228 container died f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-81562602b2ff7881f79a8f69a17e6f55e696f1c3419f324da2ad513a02b0f967-merged.mount: Deactivated successfully.
Feb 01 15:12:31 compute-0 podman[243490]: 2026-02-01 15:12:31.155460267 +0000 UTC m=+0.164331794 container remove f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:12:31 compute-0 systemd[1]: libpod-conmon-f6c54a5f6412e45353b0dad3db95acb19d9d7f22fc4456c66cac86d2b1f49560.scope: Deactivated successfully.
Feb 01 15:12:31 compute-0 podman[243530]: 2026-02-01 15:12:31.325925522 +0000 UTC m=+0.042485943 container create 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:12:31 compute-0 systemd[1]: Started libpod-conmon-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope.
Feb 01 15:12:31 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:12:31 compute-0 podman[243530]: 2026-02-01 15:12:31.30983076 +0000 UTC m=+0.026391231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:12:31 compute-0 ceph-mon[75179]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:31 compute-0 podman[243530]: 2026-02-01 15:12:31.429106018 +0000 UTC m=+0.145666469 container init 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:12:31 compute-0 podman[243530]: 2026-02-01 15:12:31.436718472 +0000 UTC m=+0.153278893 container start 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:12:31 compute-0 podman[243530]: 2026-02-01 15:12:31.440117027 +0000 UTC m=+0.156677448 container attach 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 15:12:32 compute-0 lvm[243625]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:12:32 compute-0 lvm[243625]: VG ceph_vg1 finished
Feb 01 15:12:32 compute-0 lvm[243622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:12:32 compute-0 lvm[243622]: VG ceph_vg0 finished
Feb 01 15:12:32 compute-0 lvm[243627]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:12:32 compute-0 lvm[243627]: VG ceph_vg2 finished
Feb 01 15:12:32 compute-0 lvm[243628]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:12:32 compute-0 lvm[243628]: VG ceph_vg1 finished
Feb 01 15:12:32 compute-0 hungry_goldwasser[243546]: {}
Feb 01 15:12:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:32 compute-0 systemd[1]: libpod-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Deactivated successfully.
Feb 01 15:12:32 compute-0 systemd[1]: libpod-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Consumed 1.150s CPU time.
Feb 01 15:12:32 compute-0 podman[243530]: 2026-02-01 15:12:32.271959696 +0000 UTC m=+0.988520137 container died 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1790bf49b134e0eb8bf4fb05f7caf6844475576b384e2e2e735cde68636a5ed-merged.mount: Deactivated successfully.
Feb 01 15:12:32 compute-0 podman[243530]: 2026-02-01 15:12:32.317193296 +0000 UTC m=+1.033753747 container remove 15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:12:32 compute-0 systemd[1]: libpod-conmon-15b0266cf9734a032a6123eec06e512d123ec79c12df13f292f88d9423aca495.scope: Deactivated successfully.
Feb 01 15:12:32 compute-0 sudo[243454]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:12:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:12:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:32 compute-0 sudo[243644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:12:32 compute-0 sudo[243644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:12:32 compute-0 sudo[243644]: pam_unix(sudo:session): session closed for user root
Feb 01 15:12:33 compute-0 ceph-mon[75179]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:33 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:12:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:35 compute-0 ceph-mon[75179]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:37 compute-0 ceph-mon[75179]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:39 compute-0 ceph-mon[75179]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb 01 15:12:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb 01 15:12:41 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb 01 15:12:41 compute-0 ceph-mon[75179]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb 01 15:12:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb 01 15:12:42 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb 01 15:12:42 compute-0 ceph-mon[75179]: osdmap e121: 3 total, 3 up, 3 in
Feb 01 15:12:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb 01 15:12:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb 01 15:12:43 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb 01 15:12:43 compute-0 ceph-mon[75179]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:43 compute-0 ceph-mon[75179]: osdmap e122: 3 total, 3 up, 3 in
Feb 01 15:12:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:44 compute-0 ceph-mon[75179]: osdmap e123: 3 total, 3 up, 3 in
Feb 01 15:12:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb 01 15:12:45 compute-0 ceph-mon[75179]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb 01 15:12:45 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb 01 15:12:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.095 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.096 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.221 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.222 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.222 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:12:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 MiB/s wr, 78 op/s
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.306 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.306 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:46 compute-0 nova_compute[238794]: 2026-02-01 15:12:46.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:12:46 compute-0 ceph-mon[75179]: osdmap e124: 3 total, 3 up, 3 in
Feb 01 15:12:47 compute-0 nova_compute[238794]: 2026-02-01 15:12:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:47 compute-0 ceph-mon[75179]: pgmap v789: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 MiB/s wr, 78 op/s
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.731 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.732 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:12:48 compute-0 nova_compute[238794]: 2026-02-01 15:12:48.732 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:12:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:12:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:12:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429162025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.223 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.337 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.339 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.486 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.486 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:12:49 compute-0 nova_compute[238794]: 2026-02-01 15:12:49.519 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:12:49 compute-0 ceph-mon[75179]: pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Feb 01 15:12:49 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2429162025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:12:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:12:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556144620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:12:50 compute-0 nova_compute[238794]: 2026-02-01 15:12:50.020 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:12:50 compute-0 nova_compute[238794]: 2026-02-01 15:12:50.027 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:12:50 compute-0 nova_compute[238794]: 2026-02-01 15:12:50.194 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:12:50 compute-0 nova_compute[238794]: 2026-02-01 15:12:50.195 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:12:50 compute-0 nova_compute[238794]: 2026-02-01 15:12:50.195 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:12:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Feb 01 15:12:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2556144620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:12:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:12:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:12:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:12:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:12:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb 01 15:12:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb 01 15:12:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb 01 15:12:51 compute-0 ceph-mon[75179]: pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Feb 01 15:12:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:12:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3098049236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:12:51 compute-0 ceph-mon[75179]: osdmap e125: 3 total, 3 up, 3 in
Feb 01 15:12:51 compute-0 podman[243713]: 2026-02-01 15:12:51.991884402 +0000 UTC m=+0.076031435 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:12:52 compute-0 podman[243714]: 2026-02-01 15:12:52.03099485 +0000 UTC m=+0.114989649 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 15:12:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 01 15:12:53 compute-0 ceph-mon[75179]: pgmap v793: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 01 15:12:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Feb 01 15:12:55 compute-0 ceph-mon[75179]: pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Feb 01 15:12:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:12:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:57 compute-0 ceph-mon[75179]: pgmap v795: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:12:59 compute-0 ceph-mon[75179]: pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:01 compute-0 ceph-mon[75179]: pgmap v797: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:03 compute-0 ceph-mon[75179]: pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:05 compute-0 ceph-mon[75179]: pgmap v799: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:07.161+0000 7f8267782640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/20a8c9a2-cfa0-44d6-b2f2-a4472dc96dd6'.
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.808 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:13:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:13:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:13:07 compute-0 ceph-mon[75179]: pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:07 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/49597151-cba1-48e5-979e-cda79388de34'.
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta.tmp'
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta.tmp' to config b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29/.meta'
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/0d56fdbc-9c41-43b1-9fb0-657d8d49f4ff'.
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.viosrg(active, since 22m)
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:08 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:09 compute-0 ceph-mon[75179]: pgmap v801: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb 01 15:13:09 compute-0 ceph-mon[75179]: mgrmap e10: compute-0.viosrg(active, since 22m)
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:10 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:10.921 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:13:10 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:10.923 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/beffe961-0742-4156-ad43-3b52285fd640'.
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta.tmp'
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta.tmp' to config b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2/.meta'
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:10 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:11 compute-0 ceph-mon[75179]: pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:13:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb 01 15:13:11 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "format": "json"}]: dispatch
Feb 01 15:13:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb 01 15:13:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "format": "json"}]: dispatch
Feb 01 15:13:13 compute-0 ceph-mon[75179]: pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb 01 15:13:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:915665b7-ff70-4faa-88a3-0d32becf6f29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:915665b7-ff70-4faa-88a3-0d32becf6f29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '915665b7-ff70-4faa-88a3-0d32becf6f29' of type subvolume
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.699+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '915665b7-ff70-4faa-88a3-0d32becf6f29' of type subvolume
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/915665b7-ff70-4faa-88a3-0d32becf6f29'' moved to trashcan
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:915665b7-ff70-4faa-88a3-0d32becf6f29, vol_name:cephfs) < ""
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.717+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:15.736+0000 7f8269786640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:13:15 compute-0 ceph-mon[75179]: pgmap v804: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s wr, 2 op/s
Feb 01 15:13:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bde02bc8-059b-4cad-a246-c96036843cf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bde02bc8-059b-4cad-a246-c96036843cf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bde02bc8-059b-4cad-a246-c96036843cf2' of type subvolume
Feb 01 15:13:16 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:16.127+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bde02bc8-059b-4cad-a246-c96036843cf2' of type subvolume
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bde02bc8-059b-4cad-a246-c96036843cf2'' moved to trashcan
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bde02bc8-059b-4cad-a246-c96036843cf2, vol_name:cephfs) < ""
Feb 01 15:13:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:13:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "format": "json"}]: dispatch
Feb 01 15:13:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "915665b7-ff70-4faa-88a3-0d32becf6f29", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:17 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.viosrg(active, since 22m)
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp'
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta.tmp' to config b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591/.meta'
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6676d9d7-897a-4be1-9444-f94c4c5eb9e9, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:13:17
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Feb 01 15:13:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:13:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "format": "json"}]: dispatch
Feb 01 15:13:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bde02bc8-059b-4cad-a246-c96036843cf2", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:18 compute-0 ceph-mon[75179]: pgmap v805: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:18 compute-0 ceph-mon[75179]: mgrmap e11: compute-0.viosrg(active, since 22m)
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:13:18 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:13:18.925 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9_89fefdfc-5a05-4ed0-8819-b63c5620160b", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "snap_name": "6676d9d7-897a-4be1-9444-f94c4c5eb9e9", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/b210dff2-6407-4abe-a039-ae386c608b9f'.
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta.tmp'
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta.tmp' to config b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81/.meta'
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:19 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:20 compute-0 ceph-mon[75179]: pgmap v806: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:20 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/4aec6ca1-043b-4958-8d9c-898a56795b18'.
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta.tmp'
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta.tmp' to config b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803/.meta'
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:20 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '618c0e6c-2fb1-44ff-85f4-15df368e2591' of type subvolume
Feb 01 15:13:20 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:20.747+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '618c0e6c-2fb1-44ff-85f4-15df368e2591' of type subvolume
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/618c0e6c-2fb1-44ff-85f4-15df368e2591'' moved to trashcan
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:618c0e6c-2fb1-44ff-85f4-15df368e2591, vol_name:cephfs) < ""
Feb 01 15:13:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb 01 15:13:21 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:22 compute-0 ceph-mon[75179]: pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 4 op/s
Feb 01 15:13:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb 01 15:13:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "format": "json"}]: dispatch
Feb 01 15:13:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "618c0e6c-2fb1-44ff-85f4-15df368e2591", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 8 op/s
Feb 01 15:13:22 compute-0 podman[243795]: 2026-02-01 15:13:22.977363795 +0000 UTC m=+0.065612901 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Feb 01 15:13:22 compute-0 podman[243794]: 2026-02-01 15:13:22.978772585 +0000 UTC m=+0.070909890 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb 01 15:13:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:13:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:24 compute-0 ceph-mon[75179]: pgmap v808: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 26 KiB/s wr, 8 op/s
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 6 op/s
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd6fb31c-809d-4c83-9761-28c8527a3b81' of type subvolume
Feb 01 15:13:24 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:24.716+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd6fb31c-809d-4c83-9761-28c8527a3b81' of type subvolume
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd6fb31c-809d-4c83-9761-28c8527a3b81'' moved to trashcan
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd6fb31c-809d-4c83-9761-28c8527a3b81, vol_name:cephfs) < ""
Feb 01 15:13:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb 01 15:13:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:13:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb 01 15:13:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb 01 15:13:26 compute-0 ceph-mon[75179]: pgmap v809: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 6 op/s
Feb 01 15:13:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "format": "json"}]: dispatch
Feb 01 15:13:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd6fb31c-809d-4c83-9761-28c8527a3b81", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:26 compute-0 ceph-mon[75179]: osdmap e126: 3 total, 3 up, 3 in
Feb 01 15:13:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '55a7edb3-0742-4b44-9cb7-64d96e0ec803' of type subvolume
Feb 01 15:13:26 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:26.362+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '55a7edb3-0742-4b44-9cb7-64d96e0ec803' of type subvolume
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/55a7edb3-0742-4b44-9cb7-64d96e0ec803'' moved to trashcan
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:55a7edb3-0742-4b44-9cb7-64d96e0ec803, vol_name:cephfs) < ""
Feb 01 15:13:28 compute-0 ceph-mon[75179]: pgmap v811: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "format": "json"}]: dispatch
Feb 01 15:13:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "55a7edb3-0742-4b44-9cb7-64d96e0ec803", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659498995322459 of space, bias 1.0, pg target 0.19978496985967376 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.837368437979474e-06 of space, bias 4.0, pg target 0.00940484212557537 quantized to 16 (current 16)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/cc27705c-3e9e-4106-8c7b-7566003143da'.
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta.tmp'
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta.tmp' to config b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e/.meta'
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:29 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:30 compute-0 ceph-mon[75179]: pgmap v812: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb 01 15:13:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb 01 15:13:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb 01 15:13:31 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb 01 15:13:32 compute-0 ceph-mon[75179]: pgmap v813: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 27 KiB/s wr, 8 op/s
Feb 01 15:13:32 compute-0 ceph-mon[75179]: osdmap e127: 3 total, 3 up, 3 in
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3147dea9-81aa-476a-8ff6-685b8fe5fd2e' of type subvolume
Feb 01 15:13:32 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:32.103+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3147dea9-81aa-476a-8ff6-685b8fe5fd2e' of type subvolume
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3147dea9-81aa-476a-8ff6-685b8fe5fd2e'' moved to trashcan
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3147dea9-81aa-476a-8ff6-685b8fe5fd2e, vol_name:cephfs) < ""
Feb 01 15:13:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 23 KiB/s wr, 8 op/s
Feb 01 15:13:32 compute-0 sudo[243840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:13:32 compute-0 sudo[243840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:32 compute-0 sudo[243840]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:32 compute-0 sudo[243865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 01 15:13:32 compute-0 sudo[243865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:33 compute-0 podman[243934]: 2026-02-01 15:13:33.008688026 +0000 UTC m=+0.083153674 container exec 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 15:13:33 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "format": "json"}]: dispatch
Feb 01 15:13:33 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3147dea9-81aa-476a-8ff6-685b8fe5fd2e", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:33 compute-0 ceph-mon[75179]: pgmap v815: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 23 KiB/s wr, 8 op/s
Feb 01 15:13:33 compute-0 podman[243934]: 2026-02-01 15:13:33.162737807 +0000 UTC m=+0.237203395 container exec_died 75630865abcd7bee35ae3b43cb40408cf6d8699a4275eedeb371de43d40c7f41 (image=quay.io/ceph/ceph:v20, name=ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 15:13:33 compute-0 sudo[243865]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:13:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:13:33 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:33 compute-0 sudo[244123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:13:33 compute-0 sudo[244123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:33 compute-0 sudo[244123]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:33 compute-0 sudo[244148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:13:33 compute-0 sudo[244148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 665 B/s rd, 20 KiB/s wr, 7 op/s
Feb 01 15:13:34 compute-0 sudo[244148]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:13:34 compute-0 sudo[244204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:13:34 compute-0 sudo[244204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:34 compute-0 sudo[244204]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:34 compute-0 sudo[244229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:13:34 compute-0 sudo[244229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/d2950875-19b2-4633-8278-a9181fa57d3d'.
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta.tmp'
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta.tmp' to config b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0/.meta'
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:13:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:13:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:34 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:13:34 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.881062356 +0000 UTC m=+0.045680612 container create 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb 01 15:13:34 compute-0 systemd[1]: Started libpod-conmon-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope.
Feb 01 15:13:34 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.863094613 +0000 UTC m=+0.027712909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.969667072 +0000 UTC m=+0.134285358 container init 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.975936498 +0000 UTC m=+0.140554794 container start 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.979917079 +0000 UTC m=+0.144535335 container attach 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:13:34 compute-0 objective_mccarthy[244283]: 167 167
Feb 01 15:13:34 compute-0 systemd[1]: libpod-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope: Deactivated successfully.
Feb 01 15:13:34 compute-0 podman[244266]: 2026-02-01 15:13:34.981433282 +0000 UTC m=+0.146051538 container died 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ec4744502c9c0b54fc2afb407b35538ed15da40ad756020f8f33434ac9bb78-merged.mount: Deactivated successfully.
Feb 01 15:13:35 compute-0 podman[244266]: 2026-02-01 15:13:35.080091579 +0000 UTC m=+0.244709845 container remove 98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mccarthy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:13:35 compute-0 systemd[1]: libpod-conmon-98bbce7edeb22b9a4b9eff66a8b21292cf6cb3c9b74d1d93a218917b412f29f0.scope: Deactivated successfully.
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.227374511 +0000 UTC m=+0.040622411 container create 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:13:35 compute-0 systemd[1]: Started libpod-conmon-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope.
Feb 01 15:13:35 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.208356097 +0000 UTC m=+0.021603997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.357184662 +0000 UTC m=+0.170432632 container init 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.363843389 +0000 UTC m=+0.177091259 container start 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.368039746 +0000 UTC m=+0.181287636 container attach 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:35 compute-0 gallant_kapitsa[244322]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:13:35 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:13:35 compute-0 gallant_kapitsa[244322]: --> All data devices are unavailable
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/fec65527-d4a9-4f7c-a85e-6e18557fd6b3'.
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta.tmp'
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta.tmp' to config b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270/.meta'
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:35 compute-0 systemd[1]: libpod-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope: Deactivated successfully.
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.818474491 +0000 UTC m=+0.631722361 container died 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:13:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5c8330163e16f4feef19450a1a9319ad6b30a3b0a2ebc98628e41264a3bd45-merged.mount: Deactivated successfully.
Feb 01 15:13:35 compute-0 podman[244306]: 2026-02-01 15:13:35.867608369 +0000 UTC m=+0.680856259 container remove 9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:13:35 compute-0 systemd[1]: libpod-conmon-9769bff1b7f43e08e29adfd9810f8874ad92e032e1fac6745b15d100502ff4b7.scope: Deactivated successfully.
Feb 01 15:13:35 compute-0 ceph-mon[75179]: pgmap v816: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 665 B/s rd, 20 KiB/s wr, 7 op/s
Feb 01 15:13:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb 01 15:13:35 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:35 compute-0 sudo[244229]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:35 compute-0 sudo[244356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:13:35 compute-0 sudo[244356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:35 compute-0 sudo[244356]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:36 compute-0 sudo[244381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:13:36 compute-0 sudo[244381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.302685092 +0000 UTC m=+0.059698035 container create 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 15:13:36 compute-0 systemd[1]: Started libpod-conmon-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope.
Feb 01 15:13:36 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.277243099 +0000 UTC m=+0.034256142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.377140621 +0000 UTC m=+0.134153604 container init 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.381804952 +0000 UTC m=+0.138817885 container start 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 15:13:36 compute-0 pedantic_kirch[244435]: 167 167
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.386942376 +0000 UTC m=+0.143955329 container attach 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:13:36 compute-0 systemd[1]: libpod-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope: Deactivated successfully.
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.38743217 +0000 UTC m=+0.144445133 container died 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9137cacc871f2559aed242ca0750c403cc9737bb3c7c1023d3f60b851e089c1-merged.mount: Deactivated successfully.
Feb 01 15:13:36 compute-0 podman[244418]: 2026-02-01 15:13:36.434536271 +0000 UTC m=+0.191549234 container remove 54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kirch, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:13:36 compute-0 systemd[1]: libpod-conmon-54151a97dcf7f21b55f385b3a581acfe533bc6987864946d2885ef6a1917572c.scope: Deactivated successfully.
Feb 01 15:13:36 compute-0 podman[244459]: 2026-02-01 15:13:36.614834638 +0000 UTC m=+0.048276065 container create a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 15:13:36 compute-0 systemd[1]: Started libpod-conmon-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope.
Feb 01 15:13:36 compute-0 podman[244459]: 2026-02-01 15:13:36.592268615 +0000 UTC m=+0.025710082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:36 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:36 compute-0 podman[244459]: 2026-02-01 15:13:36.711088628 +0000 UTC m=+0.144530075 container init a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 01 15:13:36 compute-0 podman[244459]: 2026-02-01 15:13:36.719400231 +0000 UTC m=+0.152841658 container start a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:13:36 compute-0 podman[244459]: 2026-02-01 15:13:36.72363888 +0000 UTC m=+0.157080307 container attach a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:13:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]: {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     "0": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "devices": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "/dev/loop3"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             ],
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_name": "ceph_lv0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_size": "21470642176",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "name": "ceph_lv0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "tags": {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_name": "ceph",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.crush_device_class": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.encrypted": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.objectstore": "bluestore",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_id": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.vdo": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.with_tpm": "0"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             },
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "vg_name": "ceph_vg0"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         }
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     ],
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     "1": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "devices": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "/dev/loop4"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             ],
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_name": "ceph_lv1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_size": "21470642176",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "name": "ceph_lv1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "tags": {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_name": "ceph",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.crush_device_class": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.encrypted": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.objectstore": "bluestore",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_id": "1",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.vdo": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.with_tpm": "0"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             },
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "vg_name": "ceph_vg1"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         }
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     ],
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     "2": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "devices": [
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "/dev/loop5"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             ],
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_name": "ceph_lv2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_size": "21470642176",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "name": "ceph_lv2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "tags": {
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.cluster_name": "ceph",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.crush_device_class": "",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.encrypted": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.objectstore": "bluestore",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osd_id": "2",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.vdo": "0",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:                 "ceph.with_tpm": "0"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             },
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "type": "block",
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:             "vg_name": "ceph_vg2"
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:         }
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]:     ]
Feb 01 15:13:36 compute-0 zen_kapitsa[244475]: }
Feb 01 15:13:37 compute-0 systemd[1]: libpod-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope: Deactivated successfully.
Feb 01 15:13:37 compute-0 podman[244459]: 2026-02-01 15:13:37.004973962 +0000 UTC m=+0.438415389 container died a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a109bd3646ad2e7fecadc408232fb746f32a6b4aaed215300645200095faa8dc-merged.mount: Deactivated successfully.
Feb 01 15:13:37 compute-0 podman[244459]: 2026-02-01 15:13:37.050360445 +0000 UTC m=+0.483801852 container remove a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:13:37 compute-0 systemd[1]: libpod-conmon-a5bd96ccc9ede914ef9a3bc2bc37601a62ad5f050d8050825ddc3fae58e7dfd7.scope: Deactivated successfully.
Feb 01 15:13:37 compute-0 sudo[244381]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:37 compute-0 sudo[244496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:13:37 compute-0 sudo[244496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:37 compute-0 sudo[244496]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:37 compute-0 sudo[244521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:13:37 compute-0 sudo[244521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.431910187 +0000 UTC m=+0.039104007 container create d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:13:37 compute-0 systemd[1]: Started libpod-conmon-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope.
Feb 01 15:13:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.488204357 +0000 UTC m=+0.095398167 container init d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.492687772 +0000 UTC m=+0.099881582 container start d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.495330946 +0000 UTC m=+0.102524766 container attach d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:13:37 compute-0 romantic_feistel[244574]: 167 167
Feb 01 15:13:37 compute-0 systemd[1]: libpod-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope: Deactivated successfully.
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.497191589 +0000 UTC m=+0.104385409 container died d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.416437013 +0000 UTC m=+0.023630863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-687d16589ead9d3db39734e9a293238d3c82aa9dafe8bef29f066431983b402b-merged.mount: Deactivated successfully.
Feb 01 15:13:37 compute-0 podman[244558]: 2026-02-01 15:13:37.675677385 +0000 UTC m=+0.282871235 container remove d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:13:37 compute-0 systemd[1]: libpod-conmon-d5a8e9f69769007fc1c969e47219bff98d913a5819d164a9e5ba98f30b3f1134.scope: Deactivated successfully.
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/2f852e97-4db6-4d48-a89b-3b24b6eaae9b'.
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta.tmp'
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta.tmp' to config b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0/.meta'
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:37 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:37 compute-0 podman[244598]: 2026-02-01 15:13:37.82201502 +0000 UTC m=+0.041002911 container create 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:13:37 compute-0 systemd[1]: Started libpod-conmon-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope.
Feb 01 15:13:37 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:13:37 compute-0 ceph-mon[75179]: pgmap v817: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:37 compute-0 podman[244598]: 2026-02-01 15:13:37.800135896 +0000 UTC m=+0.019123857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:13:37 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:37 compute-0 podman[244598]: 2026-02-01 15:13:37.919195866 +0000 UTC m=+0.138183787 container init 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:13:37 compute-0 podman[244598]: 2026-02-01 15:13:37.925259016 +0000 UTC m=+0.144246907 container start 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:13:37 compute-0 podman[244598]: 2026-02-01 15:13:37.928792675 +0000 UTC m=+0.147780606 container attach 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:38 compute-0 lvm[244691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:13:38 compute-0 lvm[244693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:13:38 compute-0 lvm[244693]: VG ceph_vg1 finished
Feb 01 15:13:38 compute-0 lvm[244691]: VG ceph_vg0 finished
Feb 01 15:13:38 compute-0 lvm[244695]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:13:38 compute-0 lvm[244695]: VG ceph_vg2 finished
Feb 01 15:13:38 compute-0 hardcore_payne[244614]: {}
Feb 01 15:13:38 compute-0 systemd[1]: libpod-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Deactivated successfully.
Feb 01 15:13:38 compute-0 systemd[1]: libpod-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Consumed 1.007s CPU time.
Feb 01 15:13:38 compute-0 podman[244598]: 2026-02-01 15:13:38.618749639 +0000 UTC m=+0.837737560 container died 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 15:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9efa0ddf9dd4a468818efe742a65e716ef6ad10fc7c15087328fcb0bc1e0f9a-merged.mount: Deactivated successfully.
Feb 01 15:13:38 compute-0 podman[244598]: 2026-02-01 15:13:38.682456646 +0000 UTC m=+0.901444527 container remove 146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 01 15:13:38 compute-0 systemd[1]: libpod-conmon-146dc126b694653f76e8410cbbb5278df45808292ab406275baa7f2f4330d220.scope: Deactivated successfully.
Feb 01 15:13:38 compute-0 sudo[244521]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:13:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:13:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:38 compute-0 sudo[244711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:38 compute-0 sudo[244711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:13:38 compute-0 sudo[244711]: pam_unix(sudo:session): session closed for user root
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/366c0b13-828c-411b-9570-e2a15ce26320'.
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:38 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb 01 15:13:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:13:38 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dc838023-ada6-4f22-947b-32f93b678270, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dc838023-ada6-4f22-947b-32f93b678270, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc838023-ada6-4f22-947b-32f93b678270' of type subvolume
Feb 01 15:13:39 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:39.297+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc838023-ada6-4f22-947b-32f93b678270' of type subvolume
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dc838023-ada6-4f22-947b-32f93b678270'' moved to trashcan
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc838023-ada6-4f22-947b-32f93b678270, vol_name:cephfs) < ""
Feb 01 15:13:39 compute-0 ceph-mon[75179]: pgmap v818: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb 01 15:13:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dc838023-ada6-4f22-947b-32f93b678270", "format": "json"}]: dispatch
Feb 01 15:13:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dc838023-ada6-4f22-947b-32f93b678270", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/6f2ed66e-0dd2-4363-8246-72938d7418e0'.
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta.tmp'
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta.tmp' to config b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f/.meta'
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:42 compute-0 ceph-mon[75179]: pgmap v819: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Feb 01 15:13:42 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 20 KiB/s wr, 5 op/s
Feb 01 15:13:42 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "format": "json"}]: dispatch
Feb 01 15:13:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "format": "json"}]: dispatch
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:13:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a46fbbf-9ff7-4e87-be5d-e5e24f824870, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:44 compute-0 ceph-mon[75179]: pgmap v820: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 182 B/s rd, 20 KiB/s wr, 5 op/s
Feb 01 15:13:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "format": "json"}]: dispatch
Feb 01 15:13:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "format": "json"}]: dispatch
Feb 01 15:13:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 5 op/s
Feb 01 15:13:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870_8c23c0e7-dc6a-4f86-92c2-9b90697f38d7", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5a46fbbf-9ff7-4e87-be5d-e5e24f824870", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/072f93e0-e115-4462-882b-057e50ec20e0'.
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta.tmp'
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta.tmp' to config b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4/.meta'
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:13:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:45.505+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c14d6f49-4f6c-4972-908b-48b473f08bc0' of type subvolume
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c14d6f49-4f6c-4972-908b-48b473f08bc0' of type subvolume
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c14d6f49-4f6c-4972-908b-48b473f08bc0'' moved to trashcan
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c14d6f49-4f6c-4972-908b-48b473f08bc0, vol_name:cephfs) < ""
Feb 01 15:13:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:46 compute-0 ceph-mon[75179]: pgmap v821: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 5 op/s
Feb 01 15:13:46 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:13:46 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb 01 15:13:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:13:46 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "format": "json"}]: dispatch
Feb 01 15:13:46 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c14d6f49-4f6c-4972-908b-48b473f08bc0", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.191 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.192 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.258 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:13:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 34 KiB/s wr, 10 op/s
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:46 compute-0 nova_compute[238794]: 2026-02-01 15:13:46.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:47 compute-0 ceph-mon[75179]: pgmap v822: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 34 KiB/s wr, 10 op/s
Feb 01 15:13:47 compute-0 nova_compute[238794]: 2026-02-01 15:13:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 27 KiB/s wr, 7 op/s
Feb 01 15:13:48 compute-0 nova_compute[238794]: 2026-02-01 15:13:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:48 compute-0 nova_compute[238794]: 2026-02-01 15:13:48.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dbe4c39-4709-4ce8-bbd5-f96172636c6f' of type subvolume
Feb 01 15:13:48 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:48.832+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dbe4c39-4709-4ce8-bbd5-f96172636c6f' of type subvolume
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2dbe4c39-4709-4ce8-bbd5-f96172636c6f'' moved to trashcan
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dbe4c39-4709-4ce8-bbd5-f96172636c6f, vol_name:cephfs) < ""
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:13:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:13:49 compute-0 nova_compute[238794]: 2026-02-01 15:13:49.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:49 compute-0 nova_compute[238794]: 2026-02-01 15:13:49.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb 01 15:13:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb 01 15:13:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb 01 15:13:49 compute-0 ceph-mon[75179]: pgmap v823: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 27 KiB/s wr, 7 op/s
Feb 01 15:13:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "format": "json"}]: dispatch
Feb 01 15:13:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dbe4c39-4709-4ce8-bbd5-f96172636c6f", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:50 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:50.257+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4' of type subvolume
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4' of type subvolume
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4'' moved to trashcan
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4, vol_name:cephfs) < ""
Feb 01 15:13:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 8 op/s
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.341 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.342 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:13:50 compute-0 ceph-mon[75179]: osdmap e128: 3 total, 3 up, 3 in
Feb 01 15:13:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:13:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577318470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.815 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:13:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:13:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:13:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:13:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.966 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:13:50 compute-0 nova_compute[238794]: 2026-02-01 15:13:50.967 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.029 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.029 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.043 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:13:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "format": "json"}]: dispatch
Feb 01 15:13:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6044dbcf-cf6c-4aa2-a3c7-fa7e2cb2faf4", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:51 compute-0 ceph-mon[75179]: pgmap v825: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 33 KiB/s wr, 8 op/s
Feb 01 15:13:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1577318470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:13:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:13:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/954235550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:13:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:13:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432741199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.581 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:13:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "format": "json"}]: dispatch
Feb 01 15:13:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.586 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:13:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.598 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.599 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:13:51 compute-0 nova_compute[238794]: 2026-02-01 15:13:51.600 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:13:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb 01 15:13:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1432741199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:13:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "format": "json"}]: dispatch
Feb 01 15:13:53 compute-0 ceph-mon[75179]: pgmap v826: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb 01 15:13:53 compute-0 podman[244781]: 2026-02-01 15:13:53.97024103 +0000 UTC m=+0.054531601 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:13:54 compute-0 podman[244782]: 2026-02-01 15:13:54.031173869 +0000 UTC m=+0.108555226 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp'
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta.tmp' to config b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c/.meta'
Feb 01 15:13:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7191da11-ab02-4a73-964f-85bc2cf8226c, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:55 compute-0 ceph-mon[75179]: pgmap v827: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 31 KiB/s wr, 8 op/s
Feb 01 15:13:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c_765a0401-6123-4825-9702-0df27b7178b8", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "snap_name": "7191da11-ab02-4a73-964f-85bc2cf8226c", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "format": "json"}]: dispatch
Feb 01 15:13:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 01 15:13:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb 01 15:13:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb 01 15:13:56 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb 01 15:13:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb 01 15:13:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "format": "json"}]: dispatch
Feb 01 15:13:56 compute-0 ceph-mon[75179]: osdmap e129: 3 total, 3 up, 3 in
Feb 01 15:13:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb 01 15:13:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb 01 15:13:57 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb 01 15:13:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "format": "json"}]: dispatch
Feb 01 15:13:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:13:57 compute-0 ceph-mon[75179]: pgmap v829: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb 01 15:13:57 compute-0 ceph-mon[75179]: osdmap e130: 3 total, 3 up, 3 in
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1cd77113-e6d6-4345-8483-5f1b1ddb866c' of type subvolume
Feb 01 15:13:58 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:13:58.138+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1cd77113-e6d6-4345-8483-5f1b1ddb866c' of type subvolume
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1cd77113-e6d6-4345-8483-5f1b1ddb866c'' moved to trashcan
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1cd77113-e6d6-4345-8483-5f1b1ddb866c, vol_name:cephfs) < ""
Feb 01 15:13:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb 01 15:13:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "format": "json"}]: dispatch
Feb 01 15:13:59 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "format": "json"}]: dispatch
Feb 01 15:13:59 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1cd77113-e6d6-4345-8483-5f1b1ddb866c", "force": true, "format": "json"}]: dispatch
Feb 01 15:13:59 compute-0 ceph-mon[75179]: pgmap v831: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 8 op/s
Feb 01 15:14:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 13 KiB/s wr, 4 op/s
Feb 01 15:14:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "format": "json"}]: dispatch
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:01 compute-0 anacron[162651]: Job `cron.daily' started
Feb 01 15:14:01 compute-0 anacron[162651]: Job `cron.daily' terminated
Feb 01 15:14:01 compute-0 ceph-mon[75179]: pgmap v832: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 13 KiB/s wr, 4 op/s
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6c72970-f609-412a-968d-5d3fe02bddc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6c72970-f609-412a-968d-5d3fe02bddc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6c72970-f609-412a-968d-5d3fe02bddc0' of type subvolume
Feb 01 15:14:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:01.958+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6c72970-f609-412a-968d-5d3fe02bddc0' of type subvolume
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6c72970-f609-412a-968d-5d3fe02bddc0'' moved to trashcan
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6c72970-f609-412a-968d-5d3fe02bddc0, vol_name:cephfs) < ""
Feb 01 15:14:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "format": "json"}]: dispatch
Feb 01 15:14:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 26 KiB/s wr, 8 op/s
Feb 01 15:14:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "format": "json"}]: dispatch
Feb 01 15:14:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "format": "json"}]: dispatch
Feb 01 15:14:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6c72970-f609-412a-968d-5d3fe02bddc0", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "format": "json"}]: dispatch
Feb 01 15:14:03 compute-0 ceph-mon[75179]: pgmap v833: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 26 KiB/s wr, 8 op/s
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 499 B/s rd, 14 KiB/s wr, 4 op/s
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea'.
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta.tmp'
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta.tmp' to config b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/.meta'
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:05 compute-0 ceph-mon[75179]: pgmap v834: 305 pgs: 305 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 499 B/s rd, 14 KiB/s wr, 4 op/s
Feb 01 15:14:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb 01 15:14:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 01 15:14:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 01 15:14:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:de13f642-4dd6-425f-b2a2-695e92172306, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 668 B/s rd, 30 KiB/s wr, 7 op/s
Feb 01 15:14:07 compute-0 ceph-mon[75179]: osdmap e131: 3 total, 3 up, 3 in
Feb 01 15:14:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306_999c7871-ceb7-40d5-8403-0424caa50678", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "de13f642-4dd6-425f-b2a2-695e92172306", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:07 compute-0 ceph-mon[75179]: pgmap v836: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 668 B/s rd, 30 KiB/s wr, 7 op/s
Feb 01 15:14:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.809 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:14:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:14:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:14:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "tenant_id": "999f6f2ae9a8410ca0b94eca9aa23d7a", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:14:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume authorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, tenant_id:999f6f2ae9a8410ca0b94eca9aa23d7a, vol_name:cephfs) < ""
Feb 01 15:14:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} v 0)
Feb 01 15:14:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb 01 15:14:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-64491543 with tenant 999f6f2ae9a8410ca0b94eca9aa23d7a
Feb 01 15:14:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:14:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:14:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:14:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume authorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, tenant_id:999f6f2ae9a8410ca0b94eca9aa23d7a, vol_name:cephfs) < ""
Feb 01 15:14:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "tenant_id": "999f6f2ae9a8410ca0b94eca9aa23d7a", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:14:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb 01 15:14:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:14:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-64491543", "caps": ["mds", "allow rw path=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c19a0244-e063-4af0-8894-414616a3f2b3", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:14:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 28 KiB/s wr, 6 op/s
Feb 01 15:14:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 01 15:14:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 01 15:14:09 compute-0 ceph-mon[75179]: pgmap v837: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 28 KiB/s wr, 6 op/s
Feb 01 15:14:09 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume deauthorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} v 0)
Feb 01 15:14:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb 01 15:14:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} v 0)
Feb 01 15:14:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} : dispatch
Feb 01 15:14:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"}]': finished
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume deauthorize, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume evict, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-64491543, client_metadata.root=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea
Feb 01 15:14:09 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-64491543,client_metadata.root=/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3/46f111b0-0ab2-4efd-b156-f926f784a2ea],prefix=session evict} (starting...)
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-64491543, format:json, prefix:fs subvolume evict, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c19a0244-e063-4af0-8894-414616a3f2b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c19a0244-e063-4af0-8894-414616a3f2b3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c19a0244-e063-4af0-8894-414616a3f2b3' of type subvolume
Feb 01 15:14:09 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:09.395+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c19a0244-e063-4af0-8894-414616a3f2b3' of type subvolume
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c19a0244-e063-4af0-8894-414616a3f2b3'' moved to trashcan
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c19a0244-e063-4af0-8894-414616a3f2b3, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f8d76a2-690b-4d7e-8c67-40b563fa4add, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:10 compute-0 ceph-mon[75179]: osdmap e132: 3 total, 3 up, 3 in
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-64491543", "format": "json"} : dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"} : dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-64491543"}]': finished
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "auth_id": "tempest-cephx-id-64491543", "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c19a0244-e063-4af0-8894-414616a3f2b3", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add_0c3f4871-3746-4345-899c-cde05e7ab29a", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f8d76a2-690b-4d7e-8c67-40b563fa4add", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 4 op/s
Feb 01 15:14:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:11 compute-0 ceph-mon[75179]: pgmap v839: 305 pgs: 305 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 21 KiB/s wr, 4 op/s
Feb 01 15:14:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 12 op/s
Feb 01 15:14:12 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:12.497 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:14:12 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:12.498 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:14:13 compute-0 ceph-mon[75179]: pgmap v840: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 12 op/s
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f1f3c043-c4de-4c8e-b742-6b2aba8a90bd, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 53 KiB/s wr, 8 op/s
Feb 01 15:14:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 01 15:14:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 01 15:14:14 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 01 15:14:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd_2401e4d9-9c2d-4644-befb-68ed41585c58", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "f1f3c043-c4de-4c8e-b742-6b2aba8a90bd", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 01 15:14:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 01 15:14:15 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 01 15:14:15 compute-0 ceph-mon[75179]: pgmap v841: 305 pgs: 305 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 53 KiB/s wr, 8 op/s
Feb 01 15:14:15 compute-0 ceph-mon[75179]: osdmap e133: 3 total, 3 up, 3 in
Feb 01 15:14:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 01 15:14:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 01 15:14:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 96 KiB/s wr, 17 op/s
Feb 01 15:14:16 compute-0 ceph-mon[75179]: osdmap e134: 3 total, 3 up, 3 in
Feb 01 15:14:16 compute-0 ceph-mon[75179]: osdmap e135: 3 total, 3 up, 3 in
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fbe6d350-ea63-4fce-8220-3c83f15d3afc, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:17 compute-0 ceph-mon[75179]: pgmap v845: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 96 KiB/s wr, 17 op/s
Feb 01 15:14:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc_8622dc9d-aad8-45b9-8bf6-4ce20c111ec2", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "fbe6d350-ea63-4fce-8220-3c83f15d3afc", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:14:17
Feb 01 15:14:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:14:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:14:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Feb 01 15:14:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/102aa16c-b8d8-4a6c-80e3-ae8484f4e160'.
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta.tmp'
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta.tmp' to config b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867/.meta'
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:18 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:18 compute-0 sshd-session[244830]: Invalid user sol from 80.94.92.171 port 58904
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:18 compute-0 sshd-session[244830]: Connection closed by invalid user sol 80.94.92.171 port 58904 [preauth]
Feb 01 15:14:19 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:14:19.500 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:14:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 01 15:14:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 01 15:14:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 01 15:14:19 compute-0 ceph-mon[75179]: pgmap v846: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 25 KiB/s wr, 6 op/s
Feb 01 15:14:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb 01 15:14:19 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp'
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta.tmp' to config b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06/.meta'
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5f853247-8fbf-41cc-a044-d26afb9421d6, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 25 KiB/s wr, 6 op/s
Feb 01 15:14:20 compute-0 ceph-mon[75179]: osdmap e136: 3 total, 3 up, 3 in
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d8696eb-14a5-4abf-b5f8-d5c0093d2c06' of type subvolume
Feb 01 15:14:21 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:21.094+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0d8696eb-14a5-4abf-b5f8-d5c0093d2c06' of type subvolume
Feb 01 15:14:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0d8696eb-14a5-4abf-b5f8-d5c0093d2c06'' moved to trashcan
Feb 01 15:14:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0d8696eb-14a5-4abf-b5f8-d5c0093d2c06, vol_name:cephfs) < ""
Feb 01 15:14:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6_6055dc83-b33d-4c1e-b4e8-46b0bf50f6e2", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "snap_name": "5f853247-8fbf-41cc-a044-d26afb9421d6", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:21 compute-0 ceph-mon[75179]: pgmap v848: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 25 KiB/s wr, 6 op/s
Feb 01 15:14:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "format": "json"}]: dispatch
Feb 01 15:14:21 compute-0 ceph-mon[75179]: osdmap e137: 3 total, 3 up, 3 in
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 43 KiB/s wr, 7 op/s
Feb 01 15:14:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0d8696eb-14a5-4abf-b5f8-d5c0093d2c06", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa2fa960-5933-441d-ba7d-210a851e8867, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa2fa960-5933-441d-ba7d-210a851e8867, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2fa960-5933-441d-ba7d-210a851e8867' of type subvolume
Feb 01 15:14:22 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:22.759+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa2fa960-5933-441d-ba7d-210a851e8867' of type subvolume
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa2fa960-5933-441d-ba7d-210a851e8867'' moved to trashcan
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa2fa960-5933-441d-ba7d-210a851e8867, vol_name:cephfs) < ""
Feb 01 15:14:23 compute-0 ceph-mon[75179]: pgmap v850: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 43 KiB/s wr, 7 op/s
Feb 01 15:14:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "format": "json"}]: dispatch
Feb 01 15:14:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa2fa960-5933-441d-ba7d-210a851e8867", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/63782069-23d1-48cb-bfe3-e74b20e4e487'.
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta.tmp'
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta.tmp' to config b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803/.meta'
Feb 01 15:14:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:14:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb 01 15:14:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:14:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:14:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:24 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 33 KiB/s wr, 5 op/s
Feb 01 15:14:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 01 15:14:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 01 15:14:24 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 01 15:14:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb 01 15:14:24 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:24 compute-0 podman[244832]: 2026-02-01 15:14:24.977097269 +0000 UTC m=+0.065707094 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 01 15:14:25 compute-0 podman[244833]: 2026-02-01 15:14:25.008741106 +0000 UTC m=+0.094083240 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 15:14:25 compute-0 ceph-mon[75179]: pgmap v851: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 33 KiB/s wr, 5 op/s
Feb 01 15:14:25 compute-0 ceph-mon[75179]: osdmap e138: 3 total, 3 up, 3 in
Feb 01 15:14:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 01 15:14:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 01 15:14:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 01 15:14:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 92 KiB/s wr, 14 op/s
Feb 01 15:14:27 compute-0 ceph-mon[75179]: osdmap e139: 3 total, 3 up, 3 in
Feb 01 15:14:27 compute-0 ceph-mon[75179]: pgmap v854: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 92 KiB/s wr, 14 op/s
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/b3263236-0a75-46ae-ab59-83a44da59eb1'.
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta.tmp'
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta.tmp' to config b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63/.meta'
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:27 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb 01 15:14:28 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659514365146859 of space, bias 1.0, pg target 0.19978543095440576 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.815308953585969e-05 of space, bias 4.0, pg target 0.045783707443031625 quantized to 16 (current 16)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:14:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 710 B/s rd, 41 KiB/s wr, 7 op/s
Feb 01 15:14:29 compute-0 ceph-mon[75179]: pgmap v855: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 710 B/s rd, 41 KiB/s wr, 7 op/s
Feb 01 15:14:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 36 KiB/s wr, 5 op/s
Feb 01 15:14:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 01 15:14:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 01 15:14:31 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 01 15:14:31 compute-0 ceph-mon[75179]: pgmap v856: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 36 KiB/s wr, 5 op/s
Feb 01 15:14:31 compute-0 ceph-mon[75179]: osdmap e140: 3 total, 3 up, 3 in
Feb 01 15:14:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb 01 15:14:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:62d0c62a-1088-49db-8483-cc680a52ec63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:62d0c62a-1088-49db-8483-cc680a52ec63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62d0c62a-1088-49db-8483-cc680a52ec63' of type subvolume
Feb 01 15:14:32 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:32.001+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '62d0c62a-1088-49db-8483-cc680a52ec63' of type subvolume
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/62d0c62a-1088-49db-8483-cc680a52ec63'' moved to trashcan
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:62d0c62a-1088-49db-8483-cc680a52ec63, vol_name:cephfs) < ""
Feb 01 15:14:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 660 B/s rd, 50 KiB/s wr, 8 op/s
Feb 01 15:14:32 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "format": "json"}]: dispatch
Feb 01 15:14:32 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "62d0c62a-1088-49db-8483-cc680a52ec63", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:33 compute-0 ceph-mon[75179]: pgmap v858: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 660 B/s rd, 50 KiB/s wr, 8 op/s
Feb 01 15:14:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 2 op/s
Feb 01 15:14:35 compute-0 ceph-mon[75179]: pgmap v859: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 2 op/s
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/5f28e54f-2195-45a3-b703-e4eee7e9f6dd'.
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta.tmp'
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta.tmp' to config b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53/.meta'
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 01 15:14:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 01 15:14:36 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 01 15:14:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:14:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb 01 15:14:36 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:36 compute-0 ceph-mon[75179]: osdmap e141: 3 total, 3 up, 3 in
Feb 01 15:14:37 compute-0 ceph-mon[75179]: pgmap v861: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:14:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:14:38 compute-0 sudo[244875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:14:38 compute-0 sudo[244875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:38 compute-0 sudo[244875]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:38 compute-0 sudo[244900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:14:38 compute-0 sudo[244900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:39 compute-0 sudo[244900]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:39 compute-0 ceph-mon[75179]: pgmap v862: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:14:39 compute-0 sudo[244958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:14:39 compute-0 sudo[244958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:39 compute-0 sudo[244958]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:39 compute-0 sudo[244983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:14:39 compute-0 sudo[244983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/86c82a78-ed68-479a-856d-a96ae3edab27'.
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.863540536 +0000 UTC m=+0.053386828 container create 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:14:39 compute-0 systemd[1]: Started libpod-conmon-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope.
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.841996022 +0000 UTC m=+0.031842314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:39 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.961265677 +0000 UTC m=+0.151112029 container init 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.971521035 +0000 UTC m=+0.161367287 container start 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.975146287 +0000 UTC m=+0.164992629 container attach 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:14:39 compute-0 romantic_sutherland[245036]: 167 167
Feb 01 15:14:39 compute-0 systemd[1]: libpod-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope: Deactivated successfully.
Feb 01 15:14:39 compute-0 conmon[245036]: conmon 3b600fc83ec7d3275d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope/container/memory.events
Feb 01 15:14:39 compute-0 podman[245020]: 2026-02-01 15:14:39.981138375 +0000 UTC m=+0.170984647 container died 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-92bf5b88c44e5021338ba854617e4bfb761e947424d0d44f338a53c219021b39-merged.mount: Deactivated successfully.
Feb 01 15:14:40 compute-0 podman[245020]: 2026-02-01 15:14:40.021771925 +0000 UTC m=+0.211618177 container remove 3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:14:40 compute-0 systemd[1]: libpod-conmon-3b600fc83ec7d3275d4c67d726405b8ed0cd91066931b79c64e87f25519d899e.scope: Deactivated successfully.
Feb 01 15:14:40 compute-0 podman[245060]: 2026-02-01 15:14:40.1731258 +0000 UTC m=+0.042415091 container create 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 15:14:40 compute-0 systemd[1]: Started libpod-conmon-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope.
Feb 01 15:14:40 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:40 compute-0 podman[245060]: 2026-02-01 15:14:40.154533729 +0000 UTC m=+0.023823050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:40 compute-0 podman[245060]: 2026-02-01 15:14:40.267356733 +0000 UTC m=+0.136646034 container init 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:14:40 compute-0 podman[245060]: 2026-02-01 15:14:40.27471165 +0000 UTC m=+0.144000941 container start 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:14:40 compute-0 podman[245060]: 2026-02-01 15:14:40.27827564 +0000 UTC m=+0.147565021 container attach 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 17 KiB/s wr, 2 op/s
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '234ef068-f24e-4b9f-8f83-1f4a01701b53' of type subvolume
Feb 01 15:14:40 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:40.395+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '234ef068-f24e-4b9f-8f83-1f4a01701b53' of type subvolume
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/234ef068-f24e-4b9f-8f83-1f4a01701b53'' moved to trashcan
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:234ef068-f24e-4b9f-8f83-1f4a01701b53, vol_name:cephfs) < ""
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb 01 15:14:40 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:40 compute-0 upbeat_jemison[245077]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:14:40 compute-0 upbeat_jemison[245077]: --> All data devices are unavailable
Feb 01 15:14:40 compute-0 systemd[1]: libpod-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope: Deactivated successfully.
Feb 01 15:14:40 compute-0 podman[245097]: 2026-02-01 15:14:40.780400473 +0000 UTC m=+0.037305357 container died 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 15:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0ffd746b7bd4ed8492cbd07590bd8fb3e581277fe83a20f3fbdbf2db326ea3-merged.mount: Deactivated successfully.
Feb 01 15:14:40 compute-0 podman[245097]: 2026-02-01 15:14:40.82875321 +0000 UTC m=+0.085658044 container remove 6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:14:40 compute-0 systemd[1]: libpod-conmon-6987691bbcaf5e1a7b6840588aeafb236eefd79069bf5df689f077b1b169f4ef.scope: Deactivated successfully.
Feb 01 15:14:40 compute-0 sudo[244983]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:40 compute-0 sudo[245112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:14:40 compute-0 sudo[245112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:40 compute-0 sudo[245112]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:41 compute-0 sudo[245137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:14:41 compute-0 sudo[245137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.291715036 +0000 UTC m=+0.055438736 container create e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:14:41 compute-0 systemd[1]: Started libpod-conmon-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope.
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.264843962 +0000 UTC m=+0.028567732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.388912642 +0000 UTC m=+0.152636412 container init e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.393753608 +0000 UTC m=+0.157477338 container start e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:14:41 compute-0 blissful_gagarin[245189]: 167 167
Feb 01 15:14:41 compute-0 systemd[1]: libpod-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope: Deactivated successfully.
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.401779663 +0000 UTC m=+0.165503473 container attach e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.402178214 +0000 UTC m=+0.165901934 container died e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ceee7451b548ea3e0750eced4d5937c55a1c1fb6499f75875edc7ea3b0cf3b-merged.mount: Deactivated successfully.
Feb 01 15:14:41 compute-0 ceph-mon[75179]: pgmap v863: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 17 KiB/s wr, 2 op/s
Feb 01 15:14:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "format": "json"}]: dispatch
Feb 01 15:14:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "234ef068-f24e-4b9f-8f83-1f4a01701b53", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:41 compute-0 podman[245173]: 2026-02-01 15:14:41.4975607 +0000 UTC m=+0.261284420 container remove e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_gagarin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:14:41 compute-0 systemd[1]: libpod-conmon-e2abee34b24f8dd79e5c2327297eb8ab615dd90f2d200ad4bf186f5ca75fb80b.scope: Deactivated successfully.
Feb 01 15:14:41 compute-0 podman[245215]: 2026-02-01 15:14:41.683827035 +0000 UTC m=+0.056222358 container create f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:14:41 compute-0 systemd[1]: Started libpod-conmon-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope.
Feb 01 15:14:41 compute-0 podman[245215]: 2026-02-01 15:14:41.658377691 +0000 UTC m=+0.030773094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:41 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:41 compute-0 podman[245215]: 2026-02-01 15:14:41.789257292 +0000 UTC m=+0.161652635 container init f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 01 15:14:41 compute-0 podman[245215]: 2026-02-01 15:14:41.795745254 +0000 UTC m=+0.168140577 container start f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:14:41 compute-0 podman[245215]: 2026-02-01 15:14:41.817530515 +0000 UTC m=+0.189925868 container attach f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:14:42 compute-0 competent_booth[245232]: {
Feb 01 15:14:42 compute-0 competent_booth[245232]:     "0": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:         {
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "devices": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "/dev/loop3"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             ],
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_name": "ceph_lv0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_size": "21470642176",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "name": "ceph_lv0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "tags": {
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_name": "ceph",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.crush_device_class": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.encrypted": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.objectstore": "bluestore",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_id": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.vdo": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.with_tpm": "0"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             },
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "vg_name": "ceph_vg0"
Feb 01 15:14:42 compute-0 competent_booth[245232]:         }
Feb 01 15:14:42 compute-0 competent_booth[245232]:     ],
Feb 01 15:14:42 compute-0 competent_booth[245232]:     "1": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:         {
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "devices": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "/dev/loop4"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             ],
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_name": "ceph_lv1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_size": "21470642176",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "name": "ceph_lv1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "tags": {
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_name": "ceph",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.crush_device_class": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.encrypted": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.objectstore": "bluestore",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_id": "1",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.vdo": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.with_tpm": "0"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             },
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "vg_name": "ceph_vg1"
Feb 01 15:14:42 compute-0 competent_booth[245232]:         }
Feb 01 15:14:42 compute-0 competent_booth[245232]:     ],
Feb 01 15:14:42 compute-0 competent_booth[245232]:     "2": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:         {
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "devices": [
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "/dev/loop5"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             ],
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_name": "ceph_lv2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_size": "21470642176",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "name": "ceph_lv2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "tags": {
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.cluster_name": "ceph",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.crush_device_class": "",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.encrypted": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.objectstore": "bluestore",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osd_id": "2",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.vdo": "0",
Feb 01 15:14:42 compute-0 competent_booth[245232]:                 "ceph.with_tpm": "0"
Feb 01 15:14:42 compute-0 competent_booth[245232]:             },
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "type": "block",
Feb 01 15:14:42 compute-0 competent_booth[245232]:             "vg_name": "ceph_vg2"
Feb 01 15:14:42 compute-0 competent_booth[245232]:         }
Feb 01 15:14:42 compute-0 competent_booth[245232]:     ]
Feb 01 15:14:42 compute-0 competent_booth[245232]: }
Feb 01 15:14:42 compute-0 systemd[1]: libpod-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope: Deactivated successfully.
Feb 01 15:14:42 compute-0 podman[245215]: 2026-02-01 15:14:42.072932199 +0000 UTC m=+0.445327522 container died f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-eced9f8edc02ccaebb96b49eed91ba55bee829f66de04f5f90d127a43372f84c-merged.mount: Deactivated successfully.
Feb 01 15:14:42 compute-0 podman[245215]: 2026-02-01 15:14:42.11752356 +0000 UTC m=+0.489918873 container remove f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb 01 15:14:42 compute-0 systemd[1]: libpod-conmon-f03bb3d5d6b149e1cb8334f1474d7dd3a8eac48b72a3442892801f66a5226326.scope: Deactivated successfully.
Feb 01 15:14:42 compute-0 sudo[245137]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:42 compute-0 sudo[245253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:14:42 compute-0 sudo[245253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:42 compute-0 sudo[245253]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:42 compute-0 sudo[245278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:14:42 compute-0 sudo[245278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.557014708 +0000 UTC m=+0.037108962 container create 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:14:42 compute-0 systemd[1]: Started libpod-conmon-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope.
Feb 01 15:14:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.609710686 +0000 UTC m=+0.089804950 container init 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.615042536 +0000 UTC m=+0.095136780 container start 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 01 15:14:42 compute-0 competent_jackson[245329]: 167 167
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.618978556 +0000 UTC m=+0.099072800 container attach 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:14:42 compute-0 systemd[1]: libpod-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope: Deactivated successfully.
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.619505221 +0000 UTC m=+0.099599455 container died 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.540570947 +0000 UTC m=+0.020665211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6dd07e8552a2dcf11f08c7d8d7d5a4dace741f909a7f990bd3b58ca05316b3f-merged.mount: Deactivated successfully.
Feb 01 15:14:42 compute-0 podman[245315]: 2026-02-01 15:14:42.655627474 +0000 UTC m=+0.135721728 container remove 19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 15:14:42 compute-0 systemd[1]: libpod-conmon-19484a69626ccff9da52aef7da2802d7ff27c9aebd9375f872331a330f3911e7.scope: Deactivated successfully.
Feb 01 15:14:42 compute-0 podman[245354]: 2026-02-01 15:14:42.807082112 +0000 UTC m=+0.056873486 container create 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:14:42 compute-0 systemd[1]: Started libpod-conmon-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope.
Feb 01 15:14:42 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:14:42 compute-0 podman[245354]: 2026-02-01 15:14:42.783035418 +0000 UTC m=+0.032826912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:14:42 compute-0 podman[245354]: 2026-02-01 15:14:42.893421064 +0000 UTC m=+0.143212528 container init 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 15:14:42 compute-0 podman[245354]: 2026-02-01 15:14:42.90610579 +0000 UTC m=+0.155897204 container start 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:14:42 compute-0 podman[245354]: 2026-02-01 15:14:42.910918505 +0000 UTC m=+0.160709919 container attach 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "format": "json"}]: dispatch
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mon[75179]: pgmap v864: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb 01 15:14:43 compute-0 lvm[245448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:14:43 compute-0 lvm[245450]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:14:43 compute-0 lvm[245450]: VG ceph_vg1 finished
Feb 01 15:14:43 compute-0 lvm[245448]: VG ceph_vg0 finished
Feb 01 15:14:43 compute-0 lvm[245452]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:14:43 compute-0 lvm[245452]: VG ceph_vg2 finished
Feb 01 15:14:43 compute-0 agitated_joliot[245371]: {}
Feb 01 15:14:43 compute-0 systemd[1]: libpod-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Deactivated successfully.
Feb 01 15:14:43 compute-0 systemd[1]: libpod-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Consumed 1.053s CPU time.
Feb 01 15:14:43 compute-0 podman[245354]: 2026-02-01 15:14:43.642099535 +0000 UTC m=+0.891890909 container died 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7ecf6dd1940c15f5623ba0ed091b5a15aabe1848cc45dcad542ef5d12960d7f-merged.mount: Deactivated successfully.
Feb 01 15:14:43 compute-0 podman[245354]: 2026-02-01 15:14:43.674586956 +0000 UTC m=+0.924378330 container remove 9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 01 15:14:43 compute-0 systemd[1]: libpod-conmon-9bbb4375213bd9983b7b56e683d461b0f08480c2e472a28ed25d24f13d54b8f0.scope: Deactivated successfully.
Feb 01 15:14:43 compute-0 sudo[245278]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:14:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:14:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:43 compute-0 sudo[245465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:14:43 compute-0 sudo[245465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:14:43 compute-0 sudo[245465]: pam_unix(sudo:session): session closed for user root
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/8a0117e1-bdfc-47e2-9388-b50bb03f2da5'.
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta.tmp'
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta.tmp' to config b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a/.meta'
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:43 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "format": "json"}]: dispatch
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb 01 15:14:44 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:45 compute-0 ceph-mon[75179]: pgmap v865: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 36 KiB/s wr, 4 op/s
Feb 01 15:14:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 01 15:14:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 01 15:14:46 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 01 15:14:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:46 compute-0 nova_compute[238794]: 2026-02-01 15:14:46.599 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:46 compute-0 nova_compute[238794]: 2026-02-01 15:14:46.600 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:46 compute-0 nova_compute[238794]: 2026-02-01 15:14:46.601 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:14:46 compute-0 nova_compute[238794]: 2026-02-01 15:14:46.601 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:14:46 compute-0 nova_compute[238794]: 2026-02-01 15:14:46.618 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:14:47 compute-0 ceph-mon[75179]: osdmap e142: 3 total, 3 up, 3 in
Feb 01 15:14:47 compute-0 ceph-mon[75179]: pgmap v867: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:47 compute-0 nova_compute[238794]: 2026-02-01 15:14:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:47 compute-0 nova_compute[238794]: 2026-02-01 15:14:47.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/bf7a0fa6-935f-438c-a9e0-4f04fe55824e'.
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta.tmp'
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta.tmp' to config b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2/.meta'
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '986a6f65-322a-44eb-81bc-bb6e9d6f221a' of type subvolume
Feb 01 15:14:47 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:47.716+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '986a6f65-322a-44eb-81bc-bb6e9d6f221a' of type subvolume
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/986a6f65-322a-44eb-81bc-bb6e9d6f221a'' moved to trashcan
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:986a6f65-322a-44eb-81bc-bb6e9d6f221a, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "format": "json"}]: dispatch
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "format": "json"}]: dispatch
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "986a6f65-322a-44eb-81bc-bb6e9d6f221a", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "format": "json"}]: dispatch
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:48 compute-0 nova_compute[238794]: 2026-02-01 15:14:48.318 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:48 compute-0 nova_compute[238794]: 2026-02-01 15:14:48.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:14:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.808917) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889808949, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2290, "num_deletes": 257, "total_data_size": 3555922, "memory_usage": 3605792, "flush_reason": "Manual Compaction"}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889819067, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3484761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16344, "largest_seqno": 18633, "table_properties": {"data_size": 3474162, "index_size": 6709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23468, "raw_average_key_size": 21, "raw_value_size": 3452205, "raw_average_value_size": 3101, "num_data_blocks": 298, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958707, "oldest_key_time": 1769958707, "file_creation_time": 1769958889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 10183 microseconds, and 4788 cpu microseconds.
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.819105) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3484761 bytes OK
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.819120) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820529) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820544) EVENT_LOG_v1 {"time_micros": 1769958889820540, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3545923, prev total WAL file size 3545923, number of live WAL files 2.
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.821112) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3403KB)], [38(7673KB)]
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889821227, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11341977, "oldest_snapshot_seqno": -1}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4600 keys, 9556428 bytes, temperature: kUnknown
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889879463, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9556428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9521941, "index_size": 21897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 111613, "raw_average_key_size": 24, "raw_value_size": 9435259, "raw_average_value_size": 2051, "num_data_blocks": 926, "num_entries": 4600, "num_filter_entries": 4600, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:14:49 compute-0 ceph-mon[75179]: pgmap v868: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.879707) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9556428 bytes
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.883283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.5 rd, 163.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5130, records dropped: 530 output_compression: NoCompression
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.883321) EVENT_LOG_v1 {"time_micros": 1769958889883310, "job": 18, "event": "compaction_finished", "compaction_time_micros": 58302, "compaction_time_cpu_micros": 27529, "output_level": 6, "num_output_files": 1, "total_output_size": 9556428, "num_input_records": 5130, "num_output_records": 4600, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889883805, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958889884842, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.820951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:49 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:14:49.884896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:14:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:50 compute-0 nova_compute[238794]: 2026-02-01 15:14:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:50 compute-0 nova_compute[238794]: 2026-02-01 15:14:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:50 compute-0 nova_compute[238794]: 2026-02-01 15:14:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:14:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:14:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:14:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:14:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/c7ded93a-afbe-41b1-ad33-9bd7a71748e6'.
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta.tmp'
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta.tmp' to config b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9/.meta'
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:51 compute-0 nova_compute[238794]: 2026-02-01 15:14:51.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:51 compute-0 ceph-mon[75179]: pgmap v869: 305 pgs: 305 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 37 KiB/s wr, 5 op/s
Feb 01 15:14:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:14:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3642991910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:14:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.350 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.351 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:14:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2132102-39e4-41f4-a6d3-e7a2a8df27cc, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:14:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190860762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.830 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:14:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb 01 15:14:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/190860762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.973 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5101MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:14:52 compute-0 nova_compute[238794]: 2026-02-01 15:14:52.974 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.046 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.046 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.067 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:14:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:14:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210678953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.531 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.535 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.555 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.557 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:14:53 compute-0 nova_compute[238794]: 2026-02-01 15:14:53.558 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:14:53 compute-0 ceph-mon[75179]: pgmap v870: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:14:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc_1ed659b6-e30b-4f53-ae01-83823d19486c", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "e2132102-39e4-41f4-a6d3-e7a2a8df27cc", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/210678953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:14:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:14:55 compute-0 podman[245535]: 2026-02-01 15:14:55.96887009 +0000 UTC m=+0.060272802 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:14:55 compute-0 ceph-mon[75179]: pgmap v871: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/343d7908-3f6a-4ee6-ae99-98e6f37f0d79'.
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta.tmp'
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta.tmp' to config b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4/.meta'
Feb 01 15:14:55 compute-0 podman[245536]: 2026-02-01 15:14:55.991906446 +0000 UTC m=+0.083530934 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb 01 15:14:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:14:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp'
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta.tmp' to config b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df/.meta'
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:00151c93-f474-433b-9073-c4743a80f8a9, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 46 KiB/s wr, 6 op/s
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc96901b-a655-4999-93d2-e6667ec9f6a9' of type subvolume
Feb 01 15:14:56 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:56.415+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc96901b-a655-4999-93d2-e6667ec9f6a9' of type subvolume
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bc96901b-a655-4999-93d2-e6667ec9f6a9'' moved to trashcan
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc96901b-a655-4999-93d2-e6667ec9f6a9, vol_name:cephfs) < ""
Feb 01 15:14:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:14:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9_7452b405-63e8-464b-8fbd-4019869a8486", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "snap_name": "00151c93-f474-433b-9073-c4743a80f8a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:57 compute-0 ceph-mon[75179]: pgmap v872: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 46 KiB/s wr, 6 op/s
Feb 01 15:14:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "format": "json"}]: dispatch
Feb 01 15:14:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc96901b-a655-4999-93d2-e6667ec9f6a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02a9afaa-78ab-4c60-9b65-efddd9ffb5df' of type subvolume
Feb 01 15:14:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:14:59.376+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '02a9afaa-78ab-4c60-9b65-efddd9ffb5df' of type subvolume
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "force": true, "format": "json"}]: dispatch
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/02a9afaa-78ab-4c60-9b65-efddd9ffb5df'' moved to trashcan
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:14:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:02a9afaa-78ab-4c60-9b65-efddd9ffb5df, vol_name:cephfs) < ""
Feb 01 15:15:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 01 15:15:00 compute-0 ceph-mon[75179]: pgmap v873: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Feb 01 15:15:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 01 15:15:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 47 KiB/s wr, 6 op/s
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eb102253-6a1f-49e8-ab97-331a8e4964d4' of type subvolume
Feb 01 15:15:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:00.822+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eb102253-6a1f-49e8-ab97-331a8e4964d4' of type subvolume
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eb102253-6a1f-49e8-ab97-331a8e4964d4'' moved to trashcan
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb102253-6a1f-49e8-ab97-331a8e4964d4, vol_name:cephfs) < ""
Feb 01 15:15:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "format": "json"}]: dispatch
Feb 01 15:15:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "02a9afaa-78ab-4c60-9b65-efddd9ffb5df", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:01 compute-0 ceph-mon[75179]: osdmap e143: 3 total, 3 up, 3 in
Feb 01 15:15:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/a4ef6e04-7742-4e57-b0b4-6785fe4b593f'.
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta.tmp'
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta.tmp' to config b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9/.meta'
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:01 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:02 compute-0 ceph-mon[75179]: pgmap v875: 305 pgs: 305 active+clean; 44 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 47 KiB/s wr, 6 op/s
Feb 01 15:15:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "format": "json"}]: dispatch
Feb 01 15:15:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eb102253-6a1f-49e8-ab97-331a8e4964d4", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:02 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb 01 15:15:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:03 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:03.104+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0be7a54-7b29-45e1-9605-eb7321d359f2' of type subvolume
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0be7a54-7b29-45e1-9605-eb7321d359f2' of type subvolume
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a0be7a54-7b29-45e1-9605-eb7321d359f2'' moved to trashcan
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0be7a54-7b29-45e1-9605-eb7321d359f2, vol_name:cephfs) < ""
Feb 01 15:15:04 compute-0 ceph-mon[75179]: pgmap v876: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb 01 15:15:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "format": "json"}]: dispatch
Feb 01 15:15:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb 01 15:15:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0be7a54-7b29-45e1-9605-eb7321d359f2", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:05 compute-0 ceph-mon[75179]: pgmap v877: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 62 KiB/s wr, 8 op/s
Feb 01 15:15:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 01 15:15:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 01 15:15:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 01 15:15:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 90 KiB/s wr, 11 op/s
Feb 01 15:15:07 compute-0 ceph-mon[75179]: osdmap e144: 3 total, 3 up, 3 in
Feb 01 15:15:07 compute-0 ceph-mon[75179]: pgmap v879: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 90 KiB/s wr, 11 op/s
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef425026-5828-4f43-8ed3-bad0eb8046b9' of type subvolume
Feb 01 15:15:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:07.506+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ef425026-5828-4f43-8ed3-bad0eb8046b9' of type subvolume
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ef425026-5828-4f43-8ed3-bad0eb8046b9'' moved to trashcan
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ef425026-5828-4f43-8ed3-bad0eb8046b9, vol_name:cephfs) < ""
Feb 01 15:15:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.810 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:15:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:15:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:15:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "format": "json"}]: dispatch
Feb 01 15:15:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef425026-5828-4f43-8ed3-bad0eb8046b9", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 87 KiB/s wr, 11 op/s
Feb 01 15:15:09 compute-0 ceph-mon[75179]: pgmap v880: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 87 KiB/s wr, 11 op/s
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/d1d2aa30-3fda-423c-98e7-19123ab0f35e'.
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb 01 15:15:10 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 72 KiB/s wr, 9 op/s
Feb 01 15:15:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:11 compute-0 ceph-mon[75179]: pgmap v881: 305 pgs: 305 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 72 KiB/s wr, 9 op/s
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/a1249ef8-0fd1-4988-8b76-452c96f79331'.
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta.tmp'
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta.tmp' to config b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4/.meta'
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:12 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:12 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "format": "json"}]: dispatch
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:13 compute-0 ceph-mon[75179]: pgmap v882: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb 01 15:15:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb 01 15:15:13 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:14f2cf47-b452-4ed6-a42d-a978bd461803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:14f2cf47-b452-4ed6-a42d-a978bd461803, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14f2cf47-b452-4ed6-a42d-a978bd461803' of type subvolume
Feb 01 15:15:13 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:13.537+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '14f2cf47-b452-4ed6-a42d-a978bd461803' of type subvolume
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/14f2cf47-b452-4ed6-a42d-a978bd461803'' moved to trashcan
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:14f2cf47-b452-4ed6-a42d-a978bd461803, vol_name:cephfs) < ""
Feb 01 15:15:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb 01 15:15:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "format": "json"}]: dispatch
Feb 01 15:15:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "format": "json"}]: dispatch
Feb 01 15:15:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "14f2cf47-b452-4ed6-a42d-a978bd461803", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:15 compute-0 ceph-mon[75179]: pgmap v883: 305 pgs: 305 active+clean; 45 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 58 KiB/s wr, 7 op/s
Feb 01 15:15:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 44 KiB/s wr, 5 op/s
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/cd46af17-607e-4852-a03f-51369d24dcbc'.
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta.tmp'
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta.tmp' to config b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509/.meta'
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:16 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:17 compute-0 ceph-mon[75179]: pgmap v884: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 402 B/s rd, 44 KiB/s wr, 5 op/s
Feb 01 15:15:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb 01 15:15:17 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp'
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta.tmp' to config b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8/.meta'
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bac99b88-a326-4fdb-ac75-b388970b7d3b, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:15:17
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Feb 01 15:15:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:15:18 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:18.228 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:15:18 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:18.230 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb 01 15:15:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b_9f047d51-0b94-405e-b75c-b64696ffced9", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "snap_name": "bac99b88-a326-4fdb-ac75-b388970b7d3b", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:19 compute-0 ceph-mon[75179]: pgmap v885: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:876b1428-1377-472c-b335-dfa9653f4509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:876b1428-1377-472c-b335-dfa9653f4509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '876b1428-1377-472c-b335-dfa9653f4509' of type subvolume
Feb 01 15:15:20 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:20.303+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '876b1428-1377-472c-b335-dfa9653f4509' of type subvolume
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/876b1428-1377-472c-b335-dfa9653f4509'' moved to trashcan
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:876b1428-1377-472c-b335-dfa9653f4509, vol_name:cephfs) < ""
Feb 01 15:15:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:21 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:21.109+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7371576d-9b9d-4a2b-b2a0-dbf1c35daed8' of type subvolume
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7371576d-9b9d-4a2b-b2a0-dbf1c35daed8' of type subvolume
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7371576d-9b9d-4a2b-b2a0-dbf1c35daed8'' moved to trashcan
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7371576d-9b9d-4a2b-b2a0-dbf1c35daed8, vol_name:cephfs) < ""
Feb 01 15:15:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "876b1428-1377-472c-b335-dfa9653f4509", "format": "json"}]: dispatch
Feb 01 15:15:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "876b1428-1377-472c-b335-dfa9653f4509", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:21 compute-0 ceph-mon[75179]: pgmap v886: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 37 KiB/s wr, 4 op/s
Feb 01 15:15:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "format": "json"}]: dispatch
Feb 01 15:15:22 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:15:22.232 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:15:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 8 op/s
Feb 01 15:15:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7371576d-9b9d-4a2b-b2a0-dbf1c35daed8", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:23 compute-0 ceph-mon[75179]: pgmap v887: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 59 KiB/s wr, 8 op/s
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:23 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:23.904+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4' of type subvolume
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4' of type subvolume
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4'' moved to trashcan
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4, vol_name:cephfs) < ""
Feb 01 15:15:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Feb 01 15:15:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "format": "json"}]: dispatch
Feb 01 15:15:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5dbda7ea-8a4a-45b6-808f-bba1d8cd95f4", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 01 15:15:25 compute-0 ceph-mon[75179]: pgmap v888: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Feb 01 15:15:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 01 15:15:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 01 15:15:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:26 compute-0 ceph-mon[75179]: osdmap e145: 3 total, 3 up, 3 in
Feb 01 15:15:26 compute-0 podman[245580]: 2026-02-01 15:15:26.980929145 +0000 UTC m=+0.065593871 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:15:27 compute-0 podman[245581]: 2026-02-01 15:15:27.026396971 +0000 UTC m=+0.106602712 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Feb 01 15:15:27 compute-0 ceph-mon[75179]: pgmap v890: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/3d7a4d56-3b54-44b4-b837-eb1b19c5bef9'.
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta.tmp'
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta.tmp' to config b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c/.meta'
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:27 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659521351430677 of space, bias 1.0, pg target 0.1997856405429203 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.898622621989711e-05 of space, bias 4.0, pg target 0.09478347146387653 quantized to 16 (current 16)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:15:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb 01 15:15:28 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:29 compute-0 ceph-mon[75179]: pgmap v891: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:31 compute-0 ceph-mon[75179]: pgmap v892: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 48 KiB/s wr, 7 op/s
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:31 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:31.779+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f230efa5-4f47-4fa4-820a-fbfacc27744c' of type subvolume
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f230efa5-4f47-4fa4-820a-fbfacc27744c' of type subvolume
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f230efa5-4f47-4fa4-820a-fbfacc27744c'' moved to trashcan
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f230efa5-4f47-4fa4-820a-fbfacc27744c, vol_name:cephfs) < ""
Feb 01 15:15:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:15:32 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "format": "json"}]: dispatch
Feb 01 15:15:32 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f230efa5-4f47-4fa4-820a-fbfacc27744c", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:33 compute-0 ceph-mon[75179]: pgmap v893: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:15:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23'.
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta.tmp'
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta.tmp' to config b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/.meta'
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:15:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:15:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:35 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:35 compute-0 ceph-mon[75179]: pgmap v894: 305 pgs: 305 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 35 KiB/s wr, 5 op/s
Feb 01 15:15:35 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 01 15:15:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 01 15:15:36 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 01 15:15:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb 01 15:15:36 compute-0 ceph-mon[75179]: osdmap e146: 3 total, 3 up, 3 in
Feb 01 15:15:37 compute-0 ceph-mon[75179]: pgmap v896: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a'.
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta.tmp'
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta.tmp' to config b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/.meta'
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:15:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:15:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:39 compute-0 ceph-mon[75179]: pgmap v897: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:39 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb 01 15:15:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:41 compute-0 ceph-mon[75179]: pgmap v898: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 33 KiB/s wr, 4 op/s
Feb 01 15:15:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb 01 15:15:42 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:15:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb 01 15:15:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:15:42 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID Joe with tenant e483891a9fd042d4a571a3d4655dc685
Feb 01 15:15:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:15:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb'.
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta.tmp'
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta.tmp' to config b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/.meta'
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:15:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:15:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:43 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:43 compute-0 ceph-mon[75179]: pgmap v899: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb 01 15:15:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:15:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:43 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:43 compute-0 sudo[245626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:15:43 compute-0 sudo[245626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:43 compute-0 sudo[245626]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:43 compute-0 sudo[245651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:15:43 compute-0 sudo[245651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:44 compute-0 sudo[245651]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:15:44 compute-0 sudo[245706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:15:44 compute-0 sudo[245706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:44 compute-0 sudo[245706]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:44 compute-0 sudo[245731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:15:44 compute-0 sudo[245731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:15:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.673664687 +0000 UTC m=+0.046987073 container create 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 01 15:15:44 compute-0 systemd[1]: Started libpod-conmon-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope.
Feb 01 15:15:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.719707952 +0000 UTC m=+0.093030358 container init 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.724983251 +0000 UTC m=+0.098305637 container start 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:15:44 compute-0 frosty_blackburn[245783]: 167 167
Feb 01 15:15:44 compute-0 systemd[1]: libpod-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope: Deactivated successfully.
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.72816744 +0000 UTC m=+0.101489816 container attach 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.729015504 +0000 UTC m=+0.102337890 container died 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ca148d47fbfda43051201434aed814ae73902d0732fc14c9c56803f3bfe286-merged.mount: Deactivated successfully.
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.655787834 +0000 UTC m=+0.029110240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:44 compute-0 podman[245768]: 2026-02-01 15:15:44.763126424 +0000 UTC m=+0.136448810 container remove 59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:15:44 compute-0 systemd[1]: libpod-conmon-59f6f76303283fd708c44680fba8fba9ae9ba73f65332b15de2574ea1136cba3.scope: Deactivated successfully.
Feb 01 15:15:44 compute-0 podman[245806]: 2026-02-01 15:15:44.88353298 +0000 UTC m=+0.039569944 container create 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:15:44 compute-0 systemd[1]: Started libpod-conmon-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope.
Feb 01 15:15:44 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:44 compute-0 podman[245806]: 2026-02-01 15:15:44.944398782 +0000 UTC m=+0.100435796 container init 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 01 15:15:44 compute-0 podman[245806]: 2026-02-01 15:15:44.949011722 +0000 UTC m=+0.105048676 container start 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:15:44 compute-0 podman[245806]: 2026-02-01 15:15:44.953878439 +0000 UTC m=+0.109915503 container attach 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 15:15:44 compute-0 podman[245806]: 2026-02-01 15:15:44.866264914 +0000 UTC m=+0.022301938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:45 compute-0 adoring_rosalind[245822]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:15:45 compute-0 adoring_rosalind[245822]: --> All data devices are unavailable
Feb 01 15:15:45 compute-0 systemd[1]: libpod-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope: Deactivated successfully.
Feb 01 15:15:45 compute-0 podman[245806]: 2026-02-01 15:15:45.316721177 +0000 UTC m=+0.472758141 container died 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b9d1bbe62e7bbab3b33fb72341e5485cac429f7575fed967e1b597039c617d0-merged.mount: Deactivated successfully.
Feb 01 15:15:45 compute-0 podman[245806]: 2026-02-01 15:15:45.359314105 +0000 UTC m=+0.515351069 container remove 774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_rosalind, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 15:15:45 compute-0 systemd[1]: libpod-conmon-774547ff8236ed932b18cd3c8b676b3ba0df1ce4d933b3ce74eb13b085f8e1b3.scope: Deactivated successfully.
Feb 01 15:15:45 compute-0 sudo[245731]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:45 compute-0 sudo[245854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:15:45 compute-0 sudo[245854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:45 compute-0 sudo[245854]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:45 compute-0 sudo[245879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:15:45 compute-0 sudo[245879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:45 compute-0 ceph-mon[75179]: pgmap v900: 305 pgs: 305 active+clean; 46 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.749290556 +0000 UTC m=+0.037114165 container create c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:15:45 compute-0 systemd[1]: Started libpod-conmon-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope.
Feb 01 15:15:45 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.815692874 +0000 UTC m=+0.103516493 container init c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.821129597 +0000 UTC m=+0.108953256 container start c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:15:45 compute-0 systemd[1]: libpod-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope: Deactivated successfully.
Feb 01 15:15:45 compute-0 happy_cartwright[245930]: 167 167
Feb 01 15:15:45 compute-0 conmon[245930]: conmon c1cde0c14b34b768c508 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope/container/memory.events
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.734470829 +0000 UTC m=+0.022294478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.872485301 +0000 UTC m=+0.160308930 container attach c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 15:15:45 compute-0 podman[245914]: 2026-02-01 15:15:45.873049917 +0000 UTC m=+0.160873556 container died c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 15:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-db7ef22c5e7056c085f3644c55d68a49b8fe91b47cb51abaf94a6a6328ea6651-merged.mount: Deactivated successfully.
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:46 compute-0 podman[245914]: 2026-02-01 15:15:46.139012739 +0000 UTC m=+0.426836348 container remove c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:46 compute-0 systemd[1]: libpod-conmon-c1cde0c14b34b768c5082f863fed77cdd145a8ecfbc4bcde3b4b5940d20acdce.scope: Deactivated successfully.
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f'.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta.tmp'
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta.tmp' to config b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/.meta'
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.164414) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946164451, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 258, "total_data_size": 969731, "memory_usage": 986024, "flush_reason": "Manual Compaction"}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946170493, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 958900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18634, "largest_seqno": 19527, "table_properties": {"data_size": 954466, "index_size": 1958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10584, "raw_average_key_size": 19, "raw_value_size": 945042, "raw_average_value_size": 1730, "num_data_blocks": 88, "num_entries": 546, "num_filter_entries": 546, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958889, "oldest_key_time": 1769958889, "file_creation_time": 1769958946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6108 microseconds, and 2771 cpu microseconds.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.170525) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 958900 bytes OK
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.170537) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172402) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172414) EVENT_LOG_v1 {"time_micros": 1769958946172410, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 965183, prev total WAL file size 965183, number of live WAL files 2.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(936KB)], [41(9332KB)]
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946172748, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10515328, "oldest_snapshot_seqno": -1}
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4614 keys, 10396351 bytes, temperature: kUnknown
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946234216, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10396351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10360294, "index_size": 23403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 113351, "raw_average_key_size": 24, "raw_value_size": 10271967, "raw_average_value_size": 2226, "num_data_blocks": 986, "num_entries": 4614, "num_filter_entries": 4614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.234476) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10396351 bytes
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.235979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.7 rd, 168.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(21.8) write-amplify(10.8) OK, records in: 5146, records dropped: 532 output_compression: NoCompression
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.236000) EVENT_LOG_v1 {"time_micros": 1769958946235990, "job": 20, "event": "compaction_finished", "compaction_time_micros": 61587, "compaction_time_cpu_micros": 14606, "output_level": 6, "num_output_files": 1, "total_output_size": 10396351, "num_input_records": 5146, "num_output_records": 4614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946236187, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958946237055, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.172641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:46.237129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.258217373 +0000 UTC m=+0.029747498 container create 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 01 15:15:46 compute-0 systemd[1]: Started libpod-conmon-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope.
Feb 01 15:15:46 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.330455435 +0000 UTC m=+0.101985570 container init 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:15:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 4 op/s
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.340316122 +0000 UTC m=+0.111846257 container start 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.343456101 +0000 UTC m=+0.114986256 container attach 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.246131403 +0000 UTC m=+0.017661558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]: {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     "0": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "devices": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "/dev/loop3"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             ],
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_name": "ceph_lv0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_size": "21470642176",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "name": "ceph_lv0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "tags": {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_name": "ceph",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.crush_device_class": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.encrypted": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.objectstore": "bluestore",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_id": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.vdo": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.with_tpm": "0"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             },
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "vg_name": "ceph_vg0"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         }
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     ],
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     "1": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "devices": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "/dev/loop4"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             ],
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_name": "ceph_lv1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_size": "21470642176",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "name": "ceph_lv1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "tags": {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_name": "ceph",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.crush_device_class": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.encrypted": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.objectstore": "bluestore",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_id": "1",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.vdo": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.with_tpm": "0"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             },
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "vg_name": "ceph_vg1"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         }
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     ],
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     "2": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "devices": [
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "/dev/loop5"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             ],
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_name": "ceph_lv2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_size": "21470642176",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "name": "ceph_lv2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "tags": {
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.cluster_name": "ceph",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.crush_device_class": "",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.encrypted": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.objectstore": "bluestore",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osd_id": "2",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.vdo": "0",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:                 "ceph.with_tpm": "0"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             },
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "type": "block",
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:             "vg_name": "ceph_vg2"
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:         }
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]:     ]
Feb 01 15:15:46 compute-0 ecstatic_jackson[245974]: }
Feb 01 15:15:46 compute-0 systemd[1]: libpod-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope: Deactivated successfully.
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.624832916 +0000 UTC m=+0.396363041 container died 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-85a2f3942c79a9508cf0bb6fc7adf1d93153e7749935d6f0f0af41384c46d2c1-merged.mount: Deactivated successfully.
Feb 01 15:15:46 compute-0 podman[245957]: 2026-02-01 15:15:46.659966235 +0000 UTC m=+0.431496370 container remove 3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jackson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:15:46 compute-0 systemd[1]: libpod-conmon-3d9ecad10558a667c22c06758deb714acea8346b8095813301cd40c6b73b0531.scope: Deactivated successfully.
Feb 01 15:15:46 compute-0 sudo[245879]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:46 compute-0 sudo[245998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:15:46 compute-0 sudo[245998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:46 compute-0 sudo[245998]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:46 compute-0 sudo[246023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:15:46 compute-0 sudo[246023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:46 compute-0 podman[246060]: 2026-02-01 15:15:46.990636267 +0000 UTC m=+0.035748897 container create 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:47 compute-0 systemd[1]: Started libpod-conmon-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope.
Feb 01 15:15:47 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:47.053004082 +0000 UTC m=+0.098116752 container init 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:47.057689113 +0000 UTC m=+0.102801753 container start 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:15:47 compute-0 sharp_jang[246077]: 167 167
Feb 01 15:15:47 compute-0 systemd[1]: libpod-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope: Deactivated successfully.
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:47.061351776 +0000 UTC m=+0.106464446 container attach 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:47.061577383 +0000 UTC m=+0.106690013 container died 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:46.97615595 +0000 UTC m=+0.021268600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d063ab31ca2cf49dc757f7992d4492b5869eba17454dcc8ea1eafa35bbffb8a-merged.mount: Deactivated successfully.
Feb 01 15:15:47 compute-0 podman[246060]: 2026-02-01 15:15:47.100247001 +0000 UTC m=+0.145359631 container remove 935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:15:47 compute-0 systemd[1]: libpod-conmon-935e271d445795dcdad9a1c4b6105e8ae861af5b5b2e7d5c71a43be47140fd92.scope: Deactivated successfully.
Feb 01 15:15:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb 01 15:15:47 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:47 compute-0 ceph-mon[75179]: pgmap v901: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s wr, 4 op/s
Feb 01 15:15:47 compute-0 podman[246102]: 2026-02-01 15:15:47.234215889 +0000 UTC m=+0.045524501 container create 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 15:15:47 compute-0 systemd[1]: Started libpod-conmon-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope.
Feb 01 15:15:47 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:15:47 compute-0 podman[246102]: 2026-02-01 15:15:47.313936492 +0000 UTC m=+0.125245114 container init 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 15:15:47 compute-0 podman[246102]: 2026-02-01 15:15:47.218751134 +0000 UTC m=+0.030059776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:15:47 compute-0 podman[246102]: 2026-02-01 15:15:47.324095568 +0000 UTC m=+0.135404210 container start 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 01 15:15:47 compute-0 podman[246102]: 2026-02-01 15:15:47.327927696 +0000 UTC m=+0.139236298 container attach 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea'.
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta.tmp'
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta.tmp' to config b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/.meta'
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.558 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.559 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.575 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:15:47 compute-0 nova_compute[238794]: 2026-02-01 15:15:47.575 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:47 compute-0 lvm[246195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:15:47 compute-0 lvm[246195]: VG ceph_vg0 finished
Feb 01 15:15:47 compute-0 lvm[246198]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:15:47 compute-0 lvm[246198]: VG ceph_vg1 finished
Feb 01 15:15:47 compute-0 lvm[246200]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:15:47 compute-0 lvm[246200]: VG ceph_vg2 finished
Feb 01 15:15:48 compute-0 fervent_agnesi[246119]: {}
Feb 01 15:15:48 compute-0 systemd[1]: libpod-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Deactivated successfully.
Feb 01 15:15:48 compute-0 systemd[1]: libpod-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Consumed 1.057s CPU time.
Feb 01 15:15:48 compute-0 podman[246102]: 2026-02-01 15:15:48.045932695 +0000 UTC m=+0.857241317 container died 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 01 15:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-91185da7210f237de3742dfb696ba736213549b1dd0a14a93dd1d9f626c60955-merged.mount: Deactivated successfully.
Feb 01 15:15:48 compute-0 podman[246102]: 2026-02-01 15:15:48.082960836 +0000 UTC m=+0.894269438 container remove 0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_agnesi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:15:48 compute-0 systemd[1]: libpod-conmon-0ec2a569a885da40edc1677121c1bbeb567f6f037cfdb130463f54627072a827.scope: Deactivated successfully.
Feb 01 15:15:48 compute-0 sudo[246023]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:15:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:15:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb 01 15:15:48 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:48 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:48 compute-0 sudo[246214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:15:48 compute-0 sudo[246214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:15:48 compute-0 sudo[246214]: pam_unix(sudo:session): session closed for user root
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:15:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/121881e8-6836-4fd0-8d00-03d9039e7468'.
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-mon[75179]: pgmap v902: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb 01 15:15:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:49 compute-0 nova_compute[238794]: 2026-02-01 15:15:49.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:49 compute-0 nova_compute[238794]: 2026-02-01 15:15:49.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb 01 15:15:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:15:49 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:49.666+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Feb 01 15:15:49 compute-0 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Feb 01 15:15:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb 01 15:15:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:15:50 compute-0 nova_compute[238794]: 2026-02-01 15:15:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb 01 15:15:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:15:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:15:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:15:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:15:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:15:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:15:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:15:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:15:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:15:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:51 compute-0 ceph-mon[75179]: pgmap v903: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s wr, 3 op/s
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2222637731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:51 compute-0 nova_compute[238794]: 2026-02-01 15:15:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:51 compute-0 nova_compute[238794]: 2026-02-01 15:15:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:15:52 compute-0 nova_compute[238794]: 2026-02-01 15:15:52.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 6 op/s
Feb 01 15:15:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "format": "json"}]: dispatch
Feb 01 15:15:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:52 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:15:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:15:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:15:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} v 0)
Feb 01 15:15:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb 01 15:15:53 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-403687319 with tenant 2731ddbed05046f3bee55c8f307163b2
Feb 01 15:15:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:15:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume authorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.345 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.345 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:15:53 compute-0 ceph-mon[75179]: pgmap v904: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 6 op/s
Feb 01 15:15:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "format": "json"}]: dispatch
Feb 01 15:15:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb 01 15:15:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:15:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-403687319", "caps": ["mds", "allow rw path=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_66ba7d88-ae35-42fd-932a-84cc5334b587", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:15:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:15:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111579567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:15:53 compute-0 nova_compute[238794]: 2026-02-01 15:15:53.882 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.041 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.042 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.098 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.098 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.120 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 5 op/s
Feb 01 15:15:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:15:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4111579567' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:15:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:15:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:15:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:15:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea
Feb 01 15:15:54 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222/895c1713-5e9a-4aa7-9027-9cea3dd8b5ea],prefix=session evict} (starting...)
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:15:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113531424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.663 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.670 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.688 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.692 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:15:54 compute-0 nova_compute[238794]: 2026-02-01 15:15:54.692 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcfe09f7-b95d-44d4-88ff-9ddff7f38222' of type subvolume
Feb 01 15:15:54 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:54.710+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bcfe09f7-b95d-44d4-88ff-9ddff7f38222' of type subvolume
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bcfe09f7-b95d-44d4-88ff-9ddff7f38222'' moved to trashcan
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bcfe09f7-b95d-44d4-88ff-9ddff7f38222, vol_name:cephfs) < ""
Feb 01 15:15:55 compute-0 ceph-mon[75179]: pgmap v905: 305 pgs: 305 active+clean; 47 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s wr, 5 op/s
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1113531424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bcfe09f7-b95d-44d4-88ff-9ddff7f38222", "force": true, "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "target_sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, target_sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/e8f29e95-6292-426b-b4e0-b055082f1eee'.
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 8a526b98-dcfb-4533-ae00-f05a7d3a9b2d for path b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, target_sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:55 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:55.995+0000 7f826bf8b640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2)
Feb 01 15:15:56 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.018+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2) -- by 0 seconds
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb 01 15:15:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.170090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956170118, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 414, "num_deletes": 250, "total_data_size": 250156, "memory_usage": 258952, "flush_reason": "Manual Compaction"}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956173741, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 247704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19528, "largest_seqno": 19941, "table_properties": {"data_size": 245121, "index_size": 619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6823, "raw_average_key_size": 20, "raw_value_size": 239890, "raw_average_value_size": 709, "num_data_blocks": 26, "num_entries": 338, "num_filter_entries": 338, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958947, "oldest_key_time": 1769958947, "file_creation_time": 1769958956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 3722 microseconds, and 1469 cpu microseconds.
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.173806) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 247704 bytes OK
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.173830) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175517) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175547) EVENT_LOG_v1 {"time_micros": 1769958956175539, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 247464, prev total WAL file size 247464, number of live WAL files 2.
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.176002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(241KB)], [44(10152KB)]
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956176052, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 10644055, "oldest_snapshot_seqno": -1}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4439 keys, 7303414 bytes, temperature: kUnknown
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956217028, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7303414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7272989, "index_size": 18219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110283, "raw_average_key_size": 24, "raw_value_size": 7192137, "raw_average_value_size": 1620, "num_data_blocks": 760, "num_entries": 4439, "num_filter_entries": 4439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769958956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.217283) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7303414 bytes
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.218482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.3 rd, 177.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(72.5) write-amplify(29.5) OK, records in: 4952, records dropped: 513 output_compression: NoCompression
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.218512) EVENT_LOG_v1 {"time_micros": 1769958956218499, "job": 22, "event": "compaction_finished", "compaction_time_micros": 41055, "compaction_time_cpu_micros": 23472, "output_level": 6, "num_output_files": 1, "total_output_size": 7303414, "num_input_records": 4952, "num_output_records": 4439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956218678, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769958956220078, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.175945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:15:56.220130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 9 op/s
Feb 01 15:15:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "target_sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:56 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:15:56.994+0000 7f8248c77640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:56 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.snap/09169dc3-0948-42ec-b7eb-9bb0391d7a50/121881e8-6836-4fd0-8d00-03d9039e7468' to b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/e8f29e95-6292-426b-b4e0-b055082f1eee'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f
Feb 01 15:15:57 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f],prefix=session evict} (starting...)
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 8a526b98-dcfb-4533-ae00-f05a7d3a9b2d
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta.tmp' to config b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2/.meta'
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 0c14589f-b0af-4342-affb-d81a226bb4b2)
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:57 compute-0 ceph-mon[75179]: pgmap v906: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 9 op/s
Feb 01 15:15:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:15:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:15:57 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.viosrg(active, since 25m)
Feb 01 15:15:57 compute-0 podman[246324]: 2026-02-01 15:15:57.970043906 +0000 UTC m=+0.054083223 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:15:58 compute-0 podman[246325]: 2026-02-01 15:15:58.005950706 +0000 UTC m=+0.088924763 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb 01 15:15:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:58 compute-0 ceph-mon[75179]: mgrmap e12: compute-0.viosrg(active, since 25m)
Feb 01 15:15:58 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 18 completed events
Feb 01 15:15:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 15:15:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:59 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.viosrg(active, since 25m)
Feb 01 15:15:59 compute-0 ceph-mon[75179]: pgmap v907: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb 01 15:15:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed'.
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta.tmp'
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta.tmp' to config b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/.meta'
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:15:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:15:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} v 0)
Feb 01 15:16:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} v 0)
Feb 01 15:16:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"}]': finished
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume deauthorize, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-403687319, client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f
Feb 01 15:16:00 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-403687319,client_metadata.root=/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587/8e410b52-d8de-4cae-8508-0fb58ac5241f],prefix=session evict} (starting...)
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-403687319, format:json, prefix:fs subvolume evict, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb 01 15:16:00 compute-0 ceph-mon[75179]: mgrmap e13: compute-0.viosrg(active, since 25m)
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-403687319", "format": "json"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"} : dispatch
Feb 01 15:16:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-403687319"}]': finished
Feb 01 15:16:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:01 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "auth_id": "tempest-cephx-id-403687319", "format": "json"}]: dispatch
Feb 01 15:16:01 compute-0 ceph-mon[75179]: pgmap v908: 305 pgs: 305 active+clean; 47 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 7 op/s
Feb 01 15:16:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_67e50812-4602-4dc4-b942-a78b28ddb769", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 114 KiB/s wr, 15 op/s
Feb 01 15:16:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:03 compute-0 ceph-mon[75179]: pgmap v909: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 114 KiB/s wr, 15 op/s
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Feb 01 15:16:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:16:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Feb 01 15:16:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb 01 15:16:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a
Feb 01 15:16:03 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8/225e29d2-d9a9-491f-bae7-2cbc01e3d01a],prefix=session evict} (starting...)
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 92 KiB/s wr, 12 op/s
Feb 01 15:16:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 01 15:16:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb 01 15:16:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb 01 15:16:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed
Feb 01 15:16:04 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769/e36c6f1f-d00d-4b20-b8ba-f9207feca0ed],prefix=session evict} (starting...)
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:67e50812-4602-4dc4-b942-a78b28ddb769, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:67e50812-4602-4dc4-b942-a78b28ddb769, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:04.886+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67e50812-4602-4dc4-b942-a78b28ddb769' of type subvolume
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67e50812-4602-4dc4-b942-a78b28ddb769' of type subvolume
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/67e50812-4602-4dc4-b942-a78b28ddb769'' moved to trashcan
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67e50812-4602-4dc4-b942-a78b28ddb769, vol_name:cephfs) < ""
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0c14589f-b0af-4342-affb-d81a226bb4b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:16:05 compute-0 ceph-mon[75179]: pgmap v910: 305 pgs: 305 active+clean; 48 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 92 KiB/s wr, 12 op/s
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67e50812-4602-4dc4-b942-a78b28ddb769", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0c14589f-b0af-4342-affb-d81a226bb4b2'' moved to trashcan
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0c14589f-b0af-4342-affb-d81a226bb4b2, vol_name:cephfs) < ""
Feb 01 15:16:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 142 KiB/s wr, 18 op/s
Feb 01 15:16:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "format": "json"}]: dispatch
Feb 01 15:16:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0c14589f-b0af-4342-affb-d81a226bb4b2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:16:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Feb 01 15:16:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Feb 01 15:16:07 compute-0 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb 01 15:16:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:16:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:07.234+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb 01 15:16:07 compute-0 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Feb 01 15:16:07 compute-0 ceph-mon[75179]: pgmap v911: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 142 KiB/s wr, 18 op/s
Feb 01 15:16:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Feb 01 15:16:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.811 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:16:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:16:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed'.
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta.tmp'
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta.tmp' to config b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/.meta'
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:16:08 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb 01 15:16:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb 01 15:16:08 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp'
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta.tmp' to config b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1/.meta'
Feb 01 15:16:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:09169dc3-0948-42ec-b7eb-9bb0391d7a50, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:09 compute-0 ceph-mon[75179]: pgmap v912: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb 01 15:16:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50_9edff701-b45a-4597-ae78-08c7150fd6a2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "snap_name": "09169dc3-0948-42ec-b7eb-9bb0391d7a50", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb 01 15:16:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:16:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb 01 15:16:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:10 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID david with tenant e483891a9fd042d4a571a3d4655dc685
Feb 01 15:16:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, tenant_id:e483891a9fd042d4a571a3d4655dc685, vol_name:cephfs) < ""
Feb 01 15:16:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:11 compute-0 ceph-mon[75179]: pgmap v913: 305 pgs: 305 active+clean; 48 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 99 KiB/s wr, 13 op/s
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "tenant_id": "e483891a9fd042d4a571a3d4655dc685", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_f13e6643-de3c-4836-add7-2244ceca3720", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_90ad7db4-01ea-4e02-bd1a-db4113b80713", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 122 KiB/s wr, 18 op/s
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:12 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:12.565+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8330130-cd80-47bb-ab6d-4bb6b88724d1' of type subvolume
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8330130-cd80-47bb-ab6d-4bb6b88724d1' of type subvolume
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8330130-cd80-47bb-ab6d-4bb6b88724d1'' moved to trashcan
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8330130-cd80-47bb-ab6d-4bb6b88724d1, vol_name:cephfs) < ""
Feb 01 15:16:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:13 compute-0 ceph-mon[75179]: pgmap v914: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 122 KiB/s wr, 18 op/s
Feb 01 15:16:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "format": "json"}]: dispatch
Feb 01 15:16:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8330130-cd80-47bb-ab6d-4bb6b88724d1", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 74 KiB/s wr, 10 op/s
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8'.
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta.tmp'
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta.tmp' to config b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/.meta'
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:16:14 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:14 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 01 15:16:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 01 15:16:14 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed
Feb 01 15:16:15 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713/3ad32f37-d508-488d-a064-3cc2c5fb01ed],prefix=session evict} (starting...)
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:15.776+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90ad7db4-01ea-4e02-bd1a-db4113b80713' of type subvolume
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '90ad7db4-01ea-4e02-bd1a-db4113b80713' of type subvolume
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/90ad7db4-01ea-4e02-bd1a-db4113b80713'' moved to trashcan
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:90ad7db4-01ea-4e02-bd1a-db4113b80713, vol_name:cephfs) < ""
Feb 01 15:16:15 compute-0 ceph-mon[75179]: pgmap v915: 305 pgs: 305 active+clean; 49 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 74 KiB/s wr, 10 op/s
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: osdmap e147: 3 total, 3 up, 3 in
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "format": "json"}]: dispatch
Feb 01 15:16:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "90ad7db4-01ea-4e02-bd1a-db4113b80713", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:16:17
Feb 01 15:16:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:16:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:16:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', 'backups']
Feb 01 15:16:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:16:17 compute-0 ceph-mon[75179]: pgmap v917: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:16:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb 01 15:16:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, tenant_id:2731ddbed05046f3bee55c8f307163b2, vol_name:cephfs) < ""
Feb 01 15:16:18 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:18.197+0000 7f8267782640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:16:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "tenant_id": "2731ddbed05046f3bee55c8f307163b2", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:19 compute-0 ceph-mon[75179]: pgmap v918: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 01 15:16:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 01 15:16:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 01 15:16:21 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:21.283 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:16:21 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:21.285 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '53873f8b-858c-4fab-a187-a58acce7cad2'
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8
Feb 01 15:16:21 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2/52e2d3d9-e8df-4982-b844-eab1575eaea8],prefix=session evict} (starting...)
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:21 compute-0 ceph-mon[75179]: pgmap v919: 305 pgs: 305 active+clean; 49 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 93 KiB/s wr, 13 op/s
Feb 01 15:16:21 compute-0 ceph-mon[75179]: osdmap e148: 3 total, 3 up, 3 in
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 112 KiB/s wr, 14 op/s
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb 01 15:16:22 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:23 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:23 compute-0 ceph-mon[75179]: pgmap v921: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 112 KiB/s wr, 14 op/s
Feb 01 15:16:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 648 B/s rd, 95 KiB/s wr, 12 op/s
Feb 01 15:16:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Feb 01 15:16:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Feb 01 15:16:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Feb 01 15:16:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23
Feb 01 15:16:25 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720/aab2a1e1-5b57-40ad-8d7a-9d89f95d2b23],prefix=session evict} (starting...)
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:25 compute-0 ceph-mon[75179]: pgmap v922: 305 pgs: 305 active+clean; 50 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 648 B/s rd, 95 KiB/s wr, 12 op/s
Feb 01 15:16:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 01 15:16:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Feb 01 15:16:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Feb 01 15:16:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "david", "format": "json"}]: dispatch
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:27 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:27 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb 01 15:16:27 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:27 compute-0 ceph-mon[75179]: pgmap v923: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:27 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:27 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:27 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659612794123319 of space, bias 1.0, pg target 0.19978838382369957 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00014578567209782184 of space, bias 4.0, pg target 0.17494280651738622 quantized to 16 (current 16)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:16:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:29 compute-0 podman[246379]: 2026-02-01 15:16:29.001051181 +0000 UTC m=+0.073497659 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 01 15:16:29 compute-0 podman[246380]: 2026-02-01 15:16:29.029923693 +0000 UTC m=+0.099052728 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller)
Feb 01 15:16:29 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:16:29.287 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53873f8b-858c-4fab-a187-a58acce7cad2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53873f8b-858c-4fab-a187-a58acce7cad2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:29.783+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53873f8b-858c-4fab-a187-a58acce7cad2' of type subvolume
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53873f8b-858c-4fab-a187-a58acce7cad2' of type subvolume
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53873f8b-858c-4fab-a187-a58acce7cad2'' moved to trashcan
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53873f8b-858c-4fab-a187-a58acce7cad2, vol_name:cephfs) < ""
Feb 01 15:16:29 compute-0 ceph-mon[75179]: pgmap v924: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:30 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:30 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:30 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "format": "json"}]: dispatch
Feb 01 15:16:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53873f8b-858c-4fab-a187-a58acce7cad2", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:30 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/e66436b8-aa27-44ad-a68b-5fc46f0da8d3'.
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta.tmp'
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta.tmp' to config b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0/.meta'
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:16:31 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:31 compute-0 ceph-mon[75179]: pgmap v925: 305 pgs: 305 active+clean; 50 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 10 op/s
Feb 01 15:16:31 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:31 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:31 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb 01 15:16:31 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 90 KiB/s wr, 10 op/s
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66ba7d88-ae35-42fd-932a-84cc5334b587, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66ba7d88-ae35-42fd-932a-84cc5334b587, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:33 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:33.345+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587' of type subvolume
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66ba7d88-ae35-42fd-932a-84cc5334b587' of type subvolume
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66ba7d88-ae35-42fd-932a-84cc5334b587'' moved to trashcan
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66ba7d88-ae35-42fd-932a-84cc5334b587, vol_name:cephfs) < ""
Feb 01 15:16:33 compute-0 ceph-mon[75179]: pgmap v926: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 90 KiB/s wr, 10 op/s
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 10 op/s
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:34 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:34 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb 01 15:16:34 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "format": "json"}]: dispatch
Feb 01 15:16:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66ba7d88-ae35-42fd-932a-84cc5334b587", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:35 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:35.570+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0bd1c69e-9d87-420b-8cc7-eab8d429d2d0' of type subvolume
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0bd1c69e-9d87-420b-8cc7-eab8d429d2d0' of type subvolume
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0bd1c69e-9d87-420b-8cc7-eab8d429d2d0'' moved to trashcan
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0bd1c69e-9d87-420b-8cc7-eab8d429d2d0, vol_name:cephfs) < ""
Feb 01 15:16:36 compute-0 ceph-mon[75179]: pgmap v927: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 10 op/s
Feb 01 15:16:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 120 KiB/s wr, 14 op/s
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:36 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:36.911+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc8298b6-cd36-4e3a-b5fa-1906378c83d8' of type subvolume
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc8298b6-cd36-4e3a-b5fa-1906378c83d8' of type subvolume
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc8298b6-cd36-4e3a-b5fa-1906378c83d8'' moved to trashcan
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc8298b6-cd36-4e3a-b5fa-1906378c83d8, vol_name:cephfs) < ""
Feb 01 15:16:37 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "format": "json"}]: dispatch
Feb 01 15:16:37 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0bd1c69e-9d87-420b-8cc7-eab8d429d2d0", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:37 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID tempest-cephx-id-1870793908 with tenant f99925486e924480b84b05e1433af949
Feb 01 15:16:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:37 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume authorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, tenant_id:f99925486e924480b84b05e1433af949, vol_name:cephfs) < ""
Feb 01 15:16:38 compute-0 ceph-mon[75179]: pgmap v928: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 120 KiB/s wr, 14 op/s
Feb 01 15:16:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "format": "json"}]: dispatch
Feb 01 15:16:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc8298b6-cd36-4e3a-b5fa-1906378c83d8", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:38 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1870793908", "caps": ["mds", "allow rw path=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb 01 15:16:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "tenant_id": "f99925486e924480b84b05e1433af949", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:40 compute-0 ceph-mon[75179]: pgmap v929: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "format": "json"}]: dispatch
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:40 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:40.534+0000 7f8267782640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f13e6643-de3c-4836-add7-2244ceca3720, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f13e6643-de3c-4836-add7-2244ceca3720, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:40 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:40.623+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f13e6643-de3c-4836-add7-2244ceca3720' of type subvolume
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f13e6643-de3c-4836-add7-2244ceca3720' of type subvolume
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f13e6643-de3c-4836-add7-2244ceca3720'' moved to trashcan
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f13e6643-de3c-4836-add7-2244ceca3720, vol_name:cephfs) < ""
Feb 01 15:16:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} v 0)
Feb 01 15:16:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} v 0)
Feb 01 15:16:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:41 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume deauthorize, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1870793908, client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb
Feb 01 15:16:41 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=tempest-cephx-id-1870793908,client_metadata.root=/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7/a4042ccc-8fdb-40f2-b5da-d525eebcfdcb],prefix=session evict} (starting...)
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1870793908, format:json, prefix:fs subvolume evict, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:42 compute-0 ceph-mon[75179]: pgmap v930: 305 pgs: 305 active+clean; 51 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 75 KiB/s wr, 9 op/s
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "auth_id": "admin", "format": "json"}]: dispatch
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f13e6643-de3c-4836-add7-2244ceca3720", "format": "json"}]: dispatch
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f13e6643-de3c-4836-add7-2244ceca3720", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1870793908", "format": "json"} : dispatch
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"} : dispatch
Feb 01 15:16:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1870793908"}]': finished
Feb 01 15:16:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 111 KiB/s wr, 14 op/s
Feb 01 15:16:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "auth_id": "tempest-cephx-id-1870793908", "format": "json"}]: dispatch
Feb 01 15:16:43 compute-0 nova_compute[238794]: 2026-02-01 15:16:43.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:44 compute-0 ceph-mon[75179]: pgmap v931: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 111 KiB/s wr, 14 op/s
Feb 01 15:16:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 73 KiB/s wr, 9 op/s
Feb 01 15:16:46 compute-0 ceph-mon[75179]: pgmap v932: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 73 KiB/s wr, 9 op/s
Feb 01 15:16:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:46 compute-0 nova_compute[238794]: 2026-02-01 15:16:46.338 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:46 compute-0 nova_compute[238794]: 2026-02-01 15:16:46.339 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:16:46 compute-0 nova_compute[238794]: 2026-02-01 15:16:46.339 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 88 KiB/s wr, 12 op/s
Feb 01 15:16:46 compute-0 nova_compute[238794]: 2026-02-01 15:16:46.361 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:16:46 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:16:46.601+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7' of type subvolume
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7' of type subvolume
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7'' moved to trashcan
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:16:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7, vol_name:cephfs) < ""
Feb 01 15:16:47 compute-0 nova_compute[238794]: 2026-02-01 15:16:47.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:47 compute-0 nova_compute[238794]: 2026-02-01 15:16:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:47 compute-0 nova_compute[238794]: 2026-02-01 15:16:47.352 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:48 compute-0 ceph-mon[75179]: pgmap v933: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 88 KiB/s wr, 12 op/s
Feb 01 15:16:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "format": "json"}]: dispatch
Feb 01 15:16:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fda2fd7c-38e4-46f7-bc0a-f227e5de8aa7", "force": true, "format": "json"}]: dispatch
Feb 01 15:16:48 compute-0 sudo[246426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:16:48 compute-0 sudo[246426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:48 compute-0 sudo[246426]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:48 compute-0 sudo[246451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:16:48 compute-0 sudo[246451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:48 compute-0 sudo[246451]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:16:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:16:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:16:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:16:48 compute-0 sudo[246507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:16:48 compute-0 sudo[246507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:48 compute-0 sudo[246507]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:49 compute-0 sudo[246532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:16:49 compute-0 sudo[246532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:16:49 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.283999584 +0000 UTC m=+0.035353855 container create 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 15:16:49 compute-0 systemd[1]: Started libpod-conmon-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope.
Feb 01 15:16:49 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.357859332 +0000 UTC m=+0.109213623 container init 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.266827051 +0000 UTC m=+0.018181322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.364651403 +0000 UTC m=+0.116005664 container start 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:16:49 compute-0 gallant_dijkstra[246586]: 167 167
Feb 01 15:16:49 compute-0 systemd[1]: libpod-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope: Deactivated successfully.
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.368374648 +0000 UTC m=+0.119728909 container attach 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 15:16:49 compute-0 conmon[246586]: conmon 55c0a62456c1b3124634 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope/container/memory.events
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.37093546 +0000 UTC m=+0.122289711 container died 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a7134e4228ea5bd78b6f33f3d4b8e3a3859840f31dcd6770ceda2acdf0541f3-merged.mount: Deactivated successfully.
Feb 01 15:16:49 compute-0 podman[246569]: 2026-02-01 15:16:49.417534761 +0000 UTC m=+0.168889042 container remove 55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dijkstra, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:16:49 compute-0 systemd[1]: libpod-conmon-55c0a62456c1b31246349f18c4ff806b9f046c66d2dfbcff1030ae206e14eccb.scope: Deactivated successfully.
Feb 01 15:16:49 compute-0 podman[246613]: 2026-02-01 15:16:49.575127584 +0000 UTC m=+0.055320987 container create a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 15:16:49 compute-0 systemd[1]: Started libpod-conmon-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope.
Feb 01 15:16:49 compute-0 podman[246613]: 2026-02-01 15:16:49.54724167 +0000 UTC m=+0.027435163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:49 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:49 compute-0 podman[246613]: 2026-02-01 15:16:49.677193955 +0000 UTC m=+0.157387398 container init a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:16:49 compute-0 podman[246613]: 2026-02-01 15:16:49.687248208 +0000 UTC m=+0.167441641 container start a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 15:16:49 compute-0 podman[246613]: 2026-02-01 15:16:49.691316763 +0000 UTC m=+0.171510316 container attach a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 01 15:16:50 compute-0 ceph-mon[75179]: pgmap v934: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb 01 15:16:50 compute-0 great_pike[246629]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:16:50 compute-0 great_pike[246629]: --> All data devices are unavailable
Feb 01 15:16:50 compute-0 systemd[1]: libpod-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope: Deactivated successfully.
Feb 01 15:16:50 compute-0 podman[246613]: 2026-02-01 15:16:50.149898164 +0000 UTC m=+0.630091597 container died a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 15:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b904d8c46b3fbbdd94b0becd8edebb6e9e7881d6171974023ba991898e58079-merged.mount: Deactivated successfully.
Feb 01 15:16:50 compute-0 podman[246613]: 2026-02-01 15:16:50.199216021 +0000 UTC m=+0.679409414 container remove a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:16:50 compute-0 systemd[1]: libpod-conmon-a797a872f6df6ca4b7a4861cf01ba5713840278fc14671f2ea13ff1dbb8e4955.scope: Deactivated successfully.
Feb 01 15:16:50 compute-0 sudo[246532]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:50 compute-0 sudo[246660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:16:50 compute-0 sudo[246660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:50 compute-0 sudo[246660]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:50 compute-0 nova_compute[238794]: 2026-02-01 15:16:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb 01 15:16:50 compute-0 sudo[246685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:16:50 compute-0 sudo[246685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.665249831 +0000 UTC m=+0.063023093 container create a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 01 15:16:50 compute-0 systemd[1]: Started libpod-conmon-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope.
Feb 01 15:16:50 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.638016225 +0000 UTC m=+0.035789557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.730538858 +0000 UTC m=+0.128312110 container init a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.736032313 +0000 UTC m=+0.133805565 container start a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 15:16:50 compute-0 unruffled_dhawan[246736]: 167 167
Feb 01 15:16:50 compute-0 systemd[1]: libpod-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope: Deactivated successfully.
Feb 01 15:16:50 compute-0 conmon[246736]: conmon a6258acb2e66db194a83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope/container/memory.events
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.739776208 +0000 UTC m=+0.137549540 container attach a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.740220111 +0000 UTC m=+0.137993383 container died a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 01 15:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0370514c6a4f766ca1cd4a4c80148cbcc98e0e259211652655f56fdcfa6292e-merged.mount: Deactivated successfully.
Feb 01 15:16:50 compute-0 podman[246720]: 2026-02-01 15:16:50.780447352 +0000 UTC m=+0.178220594 container remove a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dhawan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:16:50 compute-0 systemd[1]: libpod-conmon-a6258acb2e66db194a8330f16e2d00e2707ffee69e69202f60338d66e838b5d2.scope: Deactivated successfully.
Feb 01 15:16:50 compute-0 podman[246759]: 2026-02-01 15:16:50.922381015 +0000 UTC m=+0.042803325 container create 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:16:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:16:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:16:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:16:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:16:50 compute-0 systemd[1]: Started libpod-conmon-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope.
Feb 01 15:16:50 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:50 compute-0 podman[246759]: 2026-02-01 15:16:50.991496709 +0000 UTC m=+0.111919029 container init 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:16:50 compute-0 podman[246759]: 2026-02-01 15:16:50.995680437 +0000 UTC m=+0.116102747 container start 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:16:50 compute-0 podman[246759]: 2026-02-01 15:16:50.998823875 +0000 UTC m=+0.119246165 container attach 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 01 15:16:50 compute-0 podman[246759]: 2026-02-01 15:16:50.904607935 +0000 UTC m=+0.025030265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:16:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/46075784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:16:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:51 compute-0 gifted_murdock[246776]: {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     "0": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "devices": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "/dev/loop3"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             ],
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_name": "ceph_lv0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_size": "21470642176",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "name": "ceph_lv0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "tags": {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_name": "ceph",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.crush_device_class": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.encrypted": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.objectstore": "bluestore",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_id": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.vdo": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.with_tpm": "0"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             },
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "vg_name": "ceph_vg0"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         }
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     ],
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     "1": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "devices": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "/dev/loop4"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             ],
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_name": "ceph_lv1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_size": "21470642176",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "name": "ceph_lv1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "tags": {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_name": "ceph",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.crush_device_class": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.encrypted": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.objectstore": "bluestore",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_id": "1",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.vdo": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.with_tpm": "0"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             },
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "vg_name": "ceph_vg1"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         }
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     ],
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     "2": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "devices": [
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "/dev/loop5"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             ],
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_name": "ceph_lv2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_size": "21470642176",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "name": "ceph_lv2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "tags": {
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.cluster_name": "ceph",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.crush_device_class": "",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.encrypted": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.objectstore": "bluestore",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osd_id": "2",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.vdo": "0",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:                 "ceph.with_tpm": "0"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             },
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "type": "block",
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:             "vg_name": "ceph_vg2"
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:         }
Feb 01 15:16:51 compute-0 gifted_murdock[246776]:     ]
Feb 01 15:16:51 compute-0 gifted_murdock[246776]: }
Feb 01 15:16:51 compute-0 systemd[1]: libpod-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope: Deactivated successfully.
Feb 01 15:16:51 compute-0 podman[246759]: 2026-02-01 15:16:51.304287229 +0000 UTC m=+0.424709599 container died 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:51 compute-0 nova_compute[238794]: 2026-02-01 15:16:51.322 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 01 15:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e37f57ac3e8a2b194bdcb86e253bb3cc193432ef9ceab82afdb3d7cd318fcd6-merged.mount: Deactivated successfully.
Feb 01 15:16:51 compute-0 podman[246759]: 2026-02-01 15:16:51.34982928 +0000 UTC m=+0.470251610 container remove 75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:16:51 compute-0 systemd[1]: libpod-conmon-75960defaceda5d19ed2d68e60d043e7ebfa483ed4d0471e9a430d4ce7c67dc5.scope: Deactivated successfully.
Feb 01 15:16:51 compute-0 sudo[246685]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:51 compute-0 sudo[246797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:16:51 compute-0 sudo[246797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:51 compute-0 sudo[246797]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:51 compute-0 sudo[246822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:16:51 compute-0 sudo[246822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.780846635 +0000 UTC m=+0.047506997 container create 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:16:51 compute-0 systemd[1]: Started libpod-conmon-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope.
Feb 01 15:16:51 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.840060061 +0000 UTC m=+0.106720473 container init 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.846842482 +0000 UTC m=+0.113502804 container start 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.75400336 +0000 UTC m=+0.020663762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:51 compute-0 vigorous_goodall[246877]: 167 167
Feb 01 15:16:51 compute-0 systemd[1]: libpod-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope: Deactivated successfully.
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.849912028 +0000 UTC m=+0.116572390 container attach 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.850250448 +0000 UTC m=+0.116910800 container died 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b773edcfc57636807dc6ba765bfdda0af1ba8d1bbdd4e8ce098245ea3038bb1-merged.mount: Deactivated successfully.
Feb 01 15:16:51 compute-0 podman[246861]: 2026-02-01 15:16:51.889278026 +0000 UTC m=+0.155938388 container remove 1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 15:16:51 compute-0 systemd[1]: libpod-conmon-1850222ba1cbc9b0a424cc9568c3428ca07be31a373bb6f00f20ece094b183d6.scope: Deactivated successfully.
Feb 01 15:16:52 compute-0 podman[246900]: 2026-02-01 15:16:52.043130354 +0000 UTC m=+0.055526123 container create 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 15:16:52 compute-0 systemd[1]: Started libpod-conmon-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope.
Feb 01 15:16:52 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:16:52 compute-0 ceph-mon[75179]: pgmap v935: 305 pgs: 305 active+clean; 51 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 52 KiB/s wr, 7 op/s
Feb 01 15:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:52 compute-0 podman[246900]: 2026-02-01 15:16:52.0209455 +0000 UTC m=+0.033341279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:16:52 compute-0 podman[246900]: 2026-02-01 15:16:52.143669032 +0000 UTC m=+0.156064761 container init 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 01 15:16:52 compute-0 podman[246900]: 2026-02-01 15:16:52.151280686 +0000 UTC m=+0.163676445 container start 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:16:52 compute-0 podman[246900]: 2026-02-01 15:16:52.154634541 +0000 UTC m=+0.167030290 container attach 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:16:52 compute-0 nova_compute[238794]: 2026-02-01 15:16:52.332 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:52 compute-0 nova_compute[238794]: 2026-02-01 15:16:52.334 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 01 15:16:52 compute-0 nova_compute[238794]: 2026-02-01 15:16:52.349 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 64 KiB/s wr, 10 op/s
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2'.
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta.tmp'
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta.tmp' to config b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/.meta'
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:16:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:52 compute-0 lvm[246997]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:16:52 compute-0 lvm[246996]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:16:52 compute-0 lvm[246996]: VG ceph_vg0 finished
Feb 01 15:16:52 compute-0 lvm[246997]: VG ceph_vg1 finished
Feb 01 15:16:52 compute-0 lvm[246999]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:16:52 compute-0 lvm[246999]: VG ceph_vg2 finished
Feb 01 15:16:52 compute-0 mystifying_antonelli[246917]: {}
Feb 01 15:16:52 compute-0 systemd[1]: libpod-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Deactivated successfully.
Feb 01 15:16:52 compute-0 systemd[1]: libpod-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Consumed 1.050s CPU time.
Feb 01 15:16:52 compute-0 podman[247002]: 2026-02-01 15:16:52.901025327 +0000 UTC m=+0.018765879 container died 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a74ed4b4a0d8eff59288915d900e59975891c437aeb6eb871eea318ec753333c-merged.mount: Deactivated successfully.
Feb 01 15:16:52 compute-0 podman[247002]: 2026-02-01 15:16:52.927856612 +0000 UTC m=+0.045597164 container remove 069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:16:52 compute-0 systemd[1]: libpod-conmon-069894439de661efaa426620c6f041ddd9adee49d15ad57d08844417cad67236.scope: Deactivated successfully.
Feb 01 15:16:52 compute-0 sudo[246822]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:16:52 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:16:52 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:53 compute-0 sudo[247018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:16:53 compute-0 sudo[247018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:16:53 compute-0 sudo[247018]: pam_unix(sudo:session): session closed for user root
Feb 01 15:16:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:16:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:16:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:16:53 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:16:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:16:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:16:53 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:16:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:16:53 compute-0 nova_compute[238794]: 2026-02-01 15:16:53.337 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:54 compute-0 ceph-mon[75179]: pgmap v936: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 64 KiB/s wr, 10 op/s
Feb 01 15:16:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb 01 15:16:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:16:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 28 KiB/s wr, 5 op/s
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.344 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.345 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.346 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:16:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:16:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890315408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:16:55 compute-0 nova_compute[238794]: 2026-02-01 15:16:55.908 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.052 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.053 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.053 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.054 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:16:56 compute-0 ceph-mon[75179]: pgmap v937: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 28 KiB/s wr, 5 op/s
Feb 01 15:16:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3890315408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:16:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 46 KiB/s wr, 7 op/s
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.424 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.425 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.649 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.753 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.753 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.775 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.807 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 01 15:16:56 compute-0 nova_compute[238794]: 2026-02-01 15:16:56.825 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:16:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:16:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:16:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:16:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:16:56 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:16:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:16:57 compute-0 ceph-mon[75179]: pgmap v938: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 46 KiB/s wr, 7 op/s
Feb 01 15:16:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:16:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:16:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:16:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:16:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:16:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:16:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075517171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:16:57 compute-0 nova_compute[238794]: 2026-02-01 15:16:57.367 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:16:57 compute-0 nova_compute[238794]: 2026-02-01 15:16:57.374 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:16:57 compute-0 nova_compute[238794]: 2026-02-01 15:16:57.395 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:16:57 compute-0 nova_compute[238794]: 2026-02-01 15:16:57.399 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:16:57 compute-0 nova_compute[238794]: 2026-02-01 15:16:57.399 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:16:57 compute-0 ceph-osd[88066]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000039s
Feb 01 15:16:57 compute-0 ceph-osd[87011]: bluestore.MempoolThread fragmentation_score=0.000033 took=0.000034s
Feb 01 15:16:57 compute-0 ceph-osd[85969]: bluestore.MempoolThread fragmentation_score=0.000137 took=0.000025s
Feb 01 15:16:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2075517171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:16:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:16:59 compute-0 ceph-mon[75179]: pgmap v939: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/43092e54-1971-4f06-9465-62c98a7959e3'.
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:16:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:16:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:16:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:16:59 compute-0 podman[247088]: 2026-02-01 15:16:59.963942289 +0000 UTC m=+0.050805900 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 01 15:17:00 compute-0 podman[247089]: 2026-02-01 15:17:00.008502052 +0000 UTC m=+0.095287130 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 01 15:17:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb 01 15:17:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:17:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:01 compute-0 ceph-mon[75179]: pgmap v940: 305 pgs: 305 active+clean; 52 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 4 op/s
Feb 01 15:17:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Feb 01 15:17:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "format": "json"}]: dispatch
Feb 01 15:17:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:03 compute-0 ceph-mon[75179]: pgmap v941: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 56 KiB/s wr, 6 op/s
Feb 01 15:17:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "format": "json"}]: dispatch
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c'.
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta.tmp'
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta.tmp' to config b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/.meta'
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 4 op/s
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:17:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:04 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:05 compute-0 ceph-mon[75179]: pgmap v942: 305 pgs: 305 active+clean; 52 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s wr, 4 op/s
Feb 01 15:17:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 9 op/s
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp'
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta.tmp' to config b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8/.meta'
Feb 01 15:17:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f022aa77-e100-4ec5-bc9a-94f939ba4cfc, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb 01 15:17:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 01 15:17:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve49 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb 01 15:17:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc_05a837c1-3311-42f7-8cdb-24af5bea7bca", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "snap_name": "f022aa77-e100-4ec5-bc9a-94f939ba4cfc", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: pgmap v943: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 82 KiB/s wr, 9 op/s
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:07 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.812 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:17:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:17:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:17:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Feb 01 15:17:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 01 15:17:10 compute-0 ceph-mon[75179]: pgmap v944: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Feb 01 15:17:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 01 15:17:10 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:10 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:10.062+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af1fdb5d-a0b1-4be1-a773-3eafab00aae8' of type subvolume
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af1fdb5d-a0b1-4be1-a773-3eafab00aae8' of type subvolume
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af1fdb5d-a0b1-4be1-a773-3eafab00aae8'' moved to trashcan
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af1fdb5d-a0b1-4be1-a773-3eafab00aae8, vol_name:cephfs) < ""
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 77 KiB/s wr, 8 op/s
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Feb 01 15:17:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 01 15:17:10 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve48 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb 01 15:17:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:10 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:10 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:11 compute-0 ceph-mon[75179]: osdmap e149: 3 total, 3 up, 3 in
Feb 01 15:17:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "format": "json"}]: dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af1fdb5d-a0b1-4be1-a773-3eafab00aae8", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:17:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:11 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:12 compute-0 ceph-mon[75179]: pgmap v946: 305 pgs: 305 active+clean; 53 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 77 KiB/s wr, 8 op/s
Feb 01 15:17:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb 01 15:17:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mon[75179]: pgmap v947: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/2de0a33b-53fe-4bbd-9974-0c024599c273'.
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta.tmp'
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta.tmp' to config b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce/.meta'
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:14 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Feb 01 15:17:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 01 15:17:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Feb 01 15:17:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb 01 15:17:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb 01 15:17:14 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb 01 15:17:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb 01 15:17:16 compute-0 ceph-mon[75179]: pgmap v948: 305 pgs: 305 active+clean; 53 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 103 KiB/s wr, 12 op/s
Feb 01 15:17:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 01 15:17:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 01 15:17:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:16 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:16 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb 01 15:17:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:17 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:17:17
Feb 01 15:17:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:17:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:17:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'vms', 'images', 'cephfs.cephfs.data']
Feb 01 15:17:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dbb3e62-b996-4ace-bb16-037502f09dce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dbb3e62-b996-4ace-bb16-037502f09dce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:18.050+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6dbb3e62-b996-4ace-bb16-037502f09dce' of type subvolume
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6dbb3e62-b996-4ace-bb16-037502f09dce' of type subvolume
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6dbb3e62-b996-4ace-bb16-037502f09dce'' moved to trashcan
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dbb3e62-b996-4ace-bb16-037502f09dce, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:18 compute-0 ceph-mon[75179]: pgmap v949: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb 01 15:17:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 01 15:17:18 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID eve47 with tenant 557407533ddd4b83a57f3bf0896f77ac
Feb 01 15:17:18 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:17:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, tenant_id:557407533ddd4b83a57f3bf0896f77ac, vol_name:cephfs) < ""
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5b8370>)]
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be50e80>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f82797d1670>)]
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:17:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "format": "json"}]: dispatch
Feb 01 15:17:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dbb3e62-b996-4ace-bb16-037502f09dce", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 01 15:17:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c9b2fd01-3509-428e-b915-0b74e783dc19", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.viosrg(active, since 27m)
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.367850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039367885, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 252, "total_data_size": 1967574, "memory_usage": 2000336, "flush_reason": "Manual Compaction"}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039382987, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1945116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19942, "largest_seqno": 21478, "table_properties": {"data_size": 1938002, "index_size": 3932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17568, "raw_average_key_size": 21, "raw_value_size": 1922649, "raw_average_value_size": 2305, "num_data_blocks": 177, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769958956, "oldest_key_time": 1769958956, "file_creation_time": 1769959039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15213 microseconds, and 6255 cpu microseconds.
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.383058) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1945116 bytes OK
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.383081) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389162) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389190) EVENT_LOG_v1 {"time_micros": 1769959039389182, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1960247, prev total WAL file size 1960247, number of live WAL files 2.
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389804) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1899KB)], [47(7132KB)]
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039389846, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9248530, "oldest_snapshot_seqno": -1}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4745 keys, 7455691 bytes, temperature: kUnknown
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039435265, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7455691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7423284, "index_size": 19433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11909, "raw_key_size": 118005, "raw_average_key_size": 24, "raw_value_size": 7337094, "raw_average_value_size": 1546, "num_data_blocks": 808, "num_entries": 4745, "num_filter_entries": 4745, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.435506) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7455691 bytes
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.436933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.2 rd, 163.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.6) write-amplify(3.8) OK, records in: 5273, records dropped: 528 output_compression: NoCompression
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.436978) EVENT_LOG_v1 {"time_micros": 1769959039436941, "job": 24, "event": "compaction_finished", "compaction_time_micros": 45523, "compaction_time_cpu_micros": 23731, "output_level": 6, "num_output_files": 1, "total_output_size": 7455691, "num_input_records": 5273, "num_output_records": 4745, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039437239, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959039438089, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.389758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:17:19.438479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:17:19 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:19 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:17:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:20 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:20 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "tenant_id": "557407533ddd4b83a57f3bf0896f77ac", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:20 compute-0 ceph-mon[75179]: pgmap v950: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 122 KiB/s wr, 14 op/s
Feb 01 15:17:20 compute-0 ceph-mon[75179]: mgrmap e14: compute-0.viosrg(active, since 27m)
Feb 01 15:17:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 118 KiB/s wr, 14 op/s
Feb 01 15:17:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 01 15:17:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 01 15:17:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/8a0240ed-5f88-4931-965b-b8f7feb2baae'.
Feb 01 15:17:22 compute-0 ceph-mon[75179]: pgmap v951: 305 pgs: 305 active+clean; 54 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 118 KiB/s wr, 14 op/s
Feb 01 15:17:22 compute-0 ceph-mon[75179]: osdmap e150: 3 total, 3 up, 3 in
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:22 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Feb 01 15:17:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 01 15:17:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Feb 01 15:17:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb 01 15:17:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb 01 15:17:22 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/c9a3a2d8-1885-4fd7-9e5b-aba6a99f983b'.
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:22 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb 01 15:17:23 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:17:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:23 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:23 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:24 compute-0 ceph-mon[75179]: pgmap v953: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:24 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb 01 15:17:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "format": "json"}]: dispatch
Feb 01 15:17:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:17:26 compute-0 ceph-mon[75179]: pgmap v954: 305 pgs: 305 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 16 op/s
Feb 01 15:17:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "format": "json"}]: dispatch
Feb 01 15:17:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:27 compute-0 ceph-mon[75179]: pgmap v955: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659614191380082 of space, bias 1.0, pg target 0.19978842574140246 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022374933924286552 of space, bias 4.0, pg target 0.2684992070914386 quantized to 16 (current 16)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "format": "json"}]: dispatch
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:17:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:17:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:17:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:28 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "target_sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, target_sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/b946c66e-6da4-4a91-b4c8-4c95fea0475d'.
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 04d37bf3-1c0c-4039-ac3f-39a73a48d6b5 for path b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, target_sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.055+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.056+0000 7f826cf8d640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360)
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:29.071+0000 7f826c78c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360) -- by 0 seconds
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: pgmap v956: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "target_sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.snap/c61fb956-cb54-4a69-b984-796f123291a0/8a0240ed-5f88-4931-965b-b8f7feb2baae' to b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/b946c66e-6da4-4a91-b4c8-4c95fea0475d'
Feb 01 15:17:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Feb 01 15:17:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Feb 01 15:17:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb 01 15:17:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 04d37bf3-1c0c-4039-ac3f-39a73a48d6b5
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta.tmp' to config b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360/.meta'
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 57b6c133-b657-4e29-ab3e-f40863c80360)
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c
Feb 01 15:17:29 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19/08d80c53-80dc-4183-9c16-84cb7c2a762c],prefix=session evict} (starting...)
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c9b2fd01-3509-428e-b915-0b74e783dc19, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c9b2fd01-3509-428e-b915-0b74e783dc19, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:30 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:30.028+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9b2fd01-3509-428e-b915-0b74e783dc19' of type subvolume
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c9b2fd01-3509-428e-b915-0b74e783dc19' of type subvolume
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c9b2fd01-3509-428e-b915-0b74e783dc19'' moved to trashcan
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c9b2fd01-3509-428e-b915-0b74e783dc19, vol_name:cephfs) < ""
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb 01 15:17:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "format": "json"}]: dispatch
Feb 01 15:17:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c9b2fd01-3509-428e-b915-0b74e783dc19", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:31 compute-0 podman[247165]: 2026-02-01 15:17:31.008797103 +0000 UTC m=+0.092010340 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:17:31 compute-0 podman[247166]: 2026-02-01 15:17:31.017034594 +0000 UTC m=+0.102609867 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Feb 01 15:17:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:31 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.viosrg(active, since 27m)
Feb 01 15:17:31 compute-0 ceph-mon[75179]: pgmap v957: 305 pgs: 305 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 15 op/s
Feb 01 15:17:31 compute-0 ceph-mon[75179]: mgrmap e15: compute-0.viosrg(active, since 27m)
Feb 01 15:17:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:17:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:31 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:31 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 118 KiB/s wr, 13 op/s
Feb 01 15:17:32 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:33 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:33.480 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:17:33 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:33.482 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/78726976-b5a8-431b-96ab-e953f68fd3ff'.
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta.tmp'
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta.tmp' to config b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432/.meta'
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:33 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:33 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:33 compute-0 ceph-mon[75179]: pgmap v958: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 118 KiB/s wr, 13 op/s
Feb 01 15:17:33 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 111 KiB/s wr, 12 op/s
Feb 01 15:17:34 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:34 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:17:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:17:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:17:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:35 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:35 compute-0 ceph-mon[75179]: pgmap v959: 305 pgs: 305 active+clean; 56 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 111 KiB/s wr, 12 op/s
Feb 01 15:17:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:17:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:17:35 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:17:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 152 KiB/s wr, 18 op/s
Feb 01 15:17:36 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:17:36.485 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:17:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:37 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:37.239+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c24d660-d99e-4a84-8d8a-dd162ef7a432' of type subvolume
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c24d660-d99e-4a84-8d8a-dd162ef7a432' of type subvolume
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c24d660-d99e-4a84-8d8a-dd162ef7a432'' moved to trashcan
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c24d660-d99e-4a84-8d8a-dd162ef7a432, vol_name:cephfs) < ""
Feb 01 15:17:37 compute-0 ceph-mon[75179]: pgmap v960: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 152 KiB/s wr, 18 op/s
Feb 01 15:17:38 compute-0 nova_compute[238794]: 2026-02-01 15:17:38.108 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb 01 15:17:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "format": "json"}]: dispatch
Feb 01 15:17:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c24d660-d99e-4a84-8d8a-dd162ef7a432", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:38 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:38 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:38 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:39 compute-0 ceph-mon[75179]: pgmap v961: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb 01 15:17:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb 01 15:17:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/d292e24c-a6d4-450e-a222-6c2b805383e3'.
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta.tmp'
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta.tmp' to config b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541/.meta'
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:41 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:41 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:41 compute-0 ceph-mon[75179]: pgmap v962: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 102 KiB/s wr, 12 op/s
Feb 01 15:17:41 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 139 KiB/s wr, 16 op/s
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:17:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:42 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:42 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:42 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:44 compute-0 ceph-mon[75179]: pgmap v963: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 139 KiB/s wr, 16 op/s
Feb 01 15:17:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 9 op/s
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86350af1-da40-441c-befe-cde1cbd30541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86350af1-da40-441c-befe-cde1cbd30541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:45 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:45.827+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86350af1-da40-441c-befe-cde1cbd30541' of type subvolume
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86350af1-da40-441c-befe-cde1cbd30541' of type subvolume
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86350af1-da40-441c-befe-cde1cbd30541'' moved to trashcan
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:45 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86350af1-da40-441c-befe-cde1cbd30541, vol_name:cephfs) < ""
Feb 01 15:17:46 compute-0 ceph-mon[75179]: pgmap v964: 305 pgs: 305 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 9 op/s
Feb 01 15:17:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:46 compute-0 nova_compute[238794]: 2026-02-01 15:17:46.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:46 compute-0 nova_compute[238794]: 2026-02-01 15:17:46.340 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:17:46 compute-0 nova_compute[238794]: 2026-02-01 15:17:46.341 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:17:46 compute-0 nova_compute[238794]: 2026-02-01 15:17:46.356 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:17:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 12 op/s
Feb 01 15:17:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:46 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86350af1-da40-441c-befe-cde1cbd30541", "format": "json"}]: dispatch
Feb 01 15:17:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86350af1-da40-441c-befe-cde1cbd30541", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:47 compute-0 nova_compute[238794]: 2026-02-01 15:17:47.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:48 compute-0 ceph-mon[75179]: pgmap v965: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 12 op/s
Feb 01 15:17:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:17:48 compute-0 nova_compute[238794]: 2026-02-01 15:17:48.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/861fb7cb-7d04-4083-bc0f-ab5d8a2821b0'.
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:17:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:17:49 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/383e1c99-f6dd-41d8-9eef-e85139cf1415'.
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta.tmp'
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta.tmp' to config b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633/.meta'
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:17:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:17:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:49 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:50 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:50 compute-0 ceph-mon[75179]: pgmap v966: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:17:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb 01 15:17:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:17:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:17:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:17:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2079590638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:17:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:51 compute-0 nova_compute[238794]: 2026-02-01 15:17:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:51 compute-0 nova_compute[238794]: 2026-02-01 15:17:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:17:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "format": "json"}]: dispatch
Feb 01 15:17:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:52 compute-0 ceph-mon[75179]: pgmap v967: 305 pgs: 305 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 6 op/s
Feb 01 15:17:52 compute-0 nova_compute[238794]: 2026-02-01 15:17:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:887d0676-527e-47b5-bf80-254c50cf4633, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:887d0676-527e-47b5-bf80-254c50cf4633, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:52 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:52.749+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '887d0676-527e-47b5-bf80-254c50cf4633' of type subvolume
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '887d0676-527e-47b5-bf80-254c50cf4633' of type subvolume
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/887d0676-527e-47b5-bf80-254c50cf4633'' moved to trashcan
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:887d0676-527e-47b5-bf80-254c50cf4633, vol_name:cephfs) < ""
Feb 01 15:17:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "format": "json"}]: dispatch
Feb 01 15:17:53 compute-0 sudo[247214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:17:53 compute-0 sudo[247214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:53 compute-0 sudo[247214]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:53 compute-0 sudo[247239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:17:53 compute-0 sudo[247239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:53 compute-0 nova_compute[238794]: 2026-02-01 15:17:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:53 compute-0 nova_compute[238794]: 2026-02-01 15:17:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:53 compute-0 nova_compute[238794]: 2026-02-01 15:17:53.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:53 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:17:53 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:17:53 compute-0 sudo[247239]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:17:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:17:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:17:53 compute-0 sudo[247294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:17:53 compute-0 sudo[247294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:53 compute-0 sudo[247294]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:53 compute-0 sudo[247319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:17:53 compute-0 sudo[247319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.075878168 +0000 UTC m=+0.054621574 container create ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:17:54 compute-0 ceph-mon[75179]: pgmap v968: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "887d0676-527e-47b5-bf80-254c50cf4633", "format": "json"}]: dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "887d0676-527e-47b5-bf80-254c50cf4633", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:17:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:17:54 compute-0 systemd[1]: Started libpod-conmon-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope.
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.054365338 +0000 UTC m=+0.033108784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:54 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.169216052 +0000 UTC m=+0.147959498 container init ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.177251606 +0000 UTC m=+0.155995002 container start ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.181330389 +0000 UTC m=+0.160073795 container attach ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 01 15:17:54 compute-0 crazy_napier[247372]: 167 167
Feb 01 15:17:54 compute-0 systemd[1]: libpod-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope: Deactivated successfully.
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.184913449 +0000 UTC m=+0.163656845 container died ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 15:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c502389dab7046a76e818727c0b0124ec511c79bacf3f14b7a3bf9a4b264a4e9-merged.mount: Deactivated successfully.
Feb 01 15:17:54 compute-0 podman[247356]: 2026-02-01 15:17:54.235796258 +0000 UTC m=+0.214539664 container remove ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_napier, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:17:54 compute-0 systemd[1]: libpod-conmon-ab43e37ff359fd31229271d4fed37cc13d8334f0b4cc4813488101b6a4680e61.scope: Deactivated successfully.
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.406482969 +0000 UTC m=+0.054108930 container create cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:17:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 KiB/s wr, 9 op/s
Feb 01 15:17:54 compute-0 systemd[1]: Started libpod-conmon-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope.
Feb 01 15:17:54 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.381366758 +0000 UTC m=+0.028992809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.495037849 +0000 UTC m=+0.142663890 container init cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.501813268 +0000 UTC m=+0.149439229 container start cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.505504481 +0000 UTC m=+0.153130442 container attach cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:17:54 compute-0 brave_hellman[247412]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:17:54 compute-0 brave_hellman[247412]: --> All data devices are unavailable
Feb 01 15:17:54 compute-0 systemd[1]: libpod-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope: Deactivated successfully.
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.956671884 +0000 UTC m=+0.604297875 container died cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 15:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-70fc77f78663fef64f40cfa5b11c49e85713280bcd96a4391bd7f384c8469d51-merged.mount: Deactivated successfully.
Feb 01 15:17:54 compute-0 podman[247396]: 2026-02-01 15:17:54.996442503 +0000 UTC m=+0.644068464 container remove cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 01 15:17:55 compute-0 systemd[1]: libpod-conmon-cca3687477fbdcced2bba0a1cfec6199aaee6b997c326486ba76862bef922ab5.scope: Deactivated successfully.
Feb 01 15:17:55 compute-0 sudo[247319]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:55 compute-0 sudo[247443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:17:55 compute-0 sudo[247443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:55 compute-0 sudo[247443]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:17:55 compute-0 sudo[247468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:17:55 compute-0 sudo[247468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.341 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.342 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.385562425 +0000 UTC m=+0.044642336 container create 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 01 15:17:55 compute-0 systemd[1]: Started libpod-conmon-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope.
Feb 01 15:17:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.453902542 +0000 UTC m=+0.112982463 container init 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.459827177 +0000 UTC m=+0.118907088 container start 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.462375638 +0000 UTC m=+0.121455549 container attach 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:17:55 compute-0 competent_bell[247522]: 167 167
Feb 01 15:17:55 compute-0 systemd[1]: libpod-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope: Deactivated successfully.
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.36814408 +0000 UTC m=+0.027224031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.463845899 +0000 UTC m=+0.122925810 container died 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 01 15:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d03cf9ead78f7c7444d260a5ba547315bf77c150357050192d9aadada30f3b3-merged.mount: Deactivated successfully.
Feb 01 15:17:55 compute-0 podman[247505]: 2026-02-01 15:17:55.500945504 +0000 UTC m=+0.160025415 container remove 9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:17:55 compute-0 systemd[1]: libpod-conmon-9181fd75635a3dee6ffe49deeead43606ce81b5d94cd934609bc1406217a424b.scope: Deactivated successfully.
Feb 01 15:17:55 compute-0 podman[247565]: 2026-02-01 15:17:55.660807552 +0000 UTC m=+0.041620302 container create d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 15:17:55 compute-0 systemd[1]: Started libpod-conmon-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope.
Feb 01 15:17:55 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:55 compute-0 podman[247565]: 2026-02-01 15:17:55.642741838 +0000 UTC m=+0.023554608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:55 compute-0 podman[247565]: 2026-02-01 15:17:55.747993594 +0000 UTC m=+0.128806404 container init d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:17:55 compute-0 podman[247565]: 2026-02-01 15:17:55.753796446 +0000 UTC m=+0.134609186 container start d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 15:17:55 compute-0 podman[247565]: 2026-02-01 15:17:55.75682582 +0000 UTC m=+0.137638660 container attach d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:17:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:17:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670416892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.822 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.963 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.965 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.965 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:17:55 compute-0 nova_compute[238794]: 2026-02-01 15:17:55.966 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.039 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.039 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:17:56 compute-0 cool_wozniak[247582]: {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     "0": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "devices": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "/dev/loop3"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             ],
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_name": "ceph_lv0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_size": "21470642176",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "name": "ceph_lv0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "tags": {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_name": "ceph",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.crush_device_class": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.encrypted": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.objectstore": "bluestore",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_id": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.vdo": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.with_tpm": "0"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             },
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "vg_name": "ceph_vg0"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         }
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     ],
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     "1": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "devices": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "/dev/loop4"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             ],
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_name": "ceph_lv1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_size": "21470642176",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "name": "ceph_lv1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "tags": {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_name": "ceph",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.crush_device_class": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.encrypted": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.objectstore": "bluestore",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_id": "1",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.vdo": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.with_tpm": "0"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             },
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "vg_name": "ceph_vg1"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         }
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     ],
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     "2": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "devices": [
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "/dev/loop5"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             ],
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_name": "ceph_lv2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_size": "21470642176",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "name": "ceph_lv2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "tags": {
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.cluster_name": "ceph",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.crush_device_class": "",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.encrypted": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.objectstore": "bluestore",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osd_id": "2",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.vdo": "0",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:                 "ceph.with_tpm": "0"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             },
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "type": "block",
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:             "vg_name": "ceph_vg2"
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:         }
Feb 01 15:17:56 compute-0 cool_wozniak[247582]:     ]
Feb 01 15:17:56 compute-0 cool_wozniak[247582]: }
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.058 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:17:56 compute-0 systemd[1]: libpod-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope: Deactivated successfully.
Feb 01 15:17:56 compute-0 podman[247565]: 2026-02-01 15:17:56.071082295 +0000 UTC m=+0.451895115 container died d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70f3dfeb0e7a93a46924e302d1c4267ea58126fa1bc43c5bfbd1ee9706dc104-merged.mount: Deactivated successfully.
Feb 01 15:17:56 compute-0 podman[247565]: 2026-02-01 15:17:56.110025751 +0000 UTC m=+0.490838531 container remove d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_wozniak, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: pgmap v969: 305 pgs: 305 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 KiB/s wr, 9 op/s
Feb 01 15:17:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1670416892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:17:56 compute-0 systemd[1]: libpod-conmon-d5ee530d4ee23af59a191bb934b7919936bff93c512f84019c55945b5756ec63.scope: Deactivated successfully.
Feb 01 15:17:56 compute-0 sudo[247468]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:56 compute-0 sudo[247609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:17:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:17:56 compute-0 sudo[247609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:56 compute-0 sudo[247609]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 sudo[247651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:17:56 compute-0 sudo[247651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/1fbe2a21-7c37-459f-8e2c-6b17c0091c4a'.
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta.tmp'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta.tmp' to config b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509/.meta'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta.tmp' to config b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d/.meta'
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:337552e6-dd85-4f6d-9610-99737469dd80, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 13 op/s
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.506428597 +0000 UTC m=+0.054188943 container create f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:17:56 compute-0 systemd[1]: Started libpod-conmon-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope.
Feb 01 15:17:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363305773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:17:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.571 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.480990247 +0000 UTC m=+0.028750643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.581 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.58397934 +0000 UTC m=+0.131739746 container init f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.588005752 +0000 UTC m=+0.135766088 container start f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.59115983 +0000 UTC m=+0.138920176 container attach f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:17:56 compute-0 gallant_elion[247705]: 167 167
Feb 01 15:17:56 compute-0 systemd[1]: libpod-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope: Deactivated successfully.
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.592222039 +0000 UTC m=+0.139982375 container died f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.603 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.605 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:17:56 compute-0 nova_compute[238794]: 2026-02-01 15:17:56.606 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d133afa6e01b60f260c1df3e23517bff893e3fcb0ea06d5ae88f0e4dc84861-merged.mount: Deactivated successfully.
Feb 01 15:17:56 compute-0 podman[247689]: 2026-02-01 15:17:56.63026059 +0000 UTC m=+0.178020936 container remove f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 01 15:17:56 compute-0 systemd[1]: libpod-conmon-f7fa383c269c2d8a1dab47076903d3081f03a2ba0b45198b890bd4072e2677ed.scope: Deactivated successfully.
Feb 01 15:17:56 compute-0 podman[247731]: 2026-02-01 15:17:56.77546758 +0000 UTC m=+0.045697295 container create 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:17:56 compute-0 systemd[1]: Started libpod-conmon-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope.
Feb 01 15:17:56 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:17:56 compute-0 podman[247731]: 2026-02-01 15:17:56.758493217 +0000 UTC m=+0.028722892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:17:56 compute-0 podman[247731]: 2026-02-01 15:17:56.879704857 +0000 UTC m=+0.149934562 container init 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:56 compute-0 podman[247731]: 2026-02-01 15:17:56.892677459 +0000 UTC m=+0.162907154 container start 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:17:56 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:56 compute-0 podman[247731]: 2026-02-01 15:17:56.896534767 +0000 UTC m=+0.166764482 container attach 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:17:56 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:17:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:17:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3363305773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:17:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:17:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:17:57 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:17:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:57 compute-0 lvm[247826]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:17:57 compute-0 lvm[247826]: VG ceph_vg0 finished
Feb 01 15:17:57 compute-0 lvm[247829]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:17:57 compute-0 lvm[247829]: VG ceph_vg1 finished
Feb 01 15:17:57 compute-0 lvm[247831]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:17:57 compute-0 lvm[247831]: VG ceph_vg2 finished
Feb 01 15:17:57 compute-0 practical_nightingale[247747]: {}
Feb 01 15:17:57 compute-0 systemd[1]: libpod-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Deactivated successfully.
Feb 01 15:17:57 compute-0 systemd[1]: libpod-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Consumed 1.066s CPU time.
Feb 01 15:17:57 compute-0 podman[247731]: 2026-02-01 15:17:57.615625491 +0000 UTC m=+0.885855156 container died 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d47d271aae0b0db03358aef2b3e7be2e169e1da5f5989cd1e56d5889c4f844c-merged.mount: Deactivated successfully.
Feb 01 15:17:57 compute-0 podman[247731]: 2026-02-01 15:17:57.659741712 +0000 UTC m=+0.929971377 container remove 1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 01 15:17:57 compute-0 systemd[1]: libpod-conmon-1279b71cb8373bd6dd4e6a5a9207569a0c784d8b82eab82a982357926c647aa5.scope: Deactivated successfully.
Feb 01 15:17:57 compute-0 sudo[247651]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:17:57 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:17:57 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:57 compute-0 sudo[247846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:17:57 compute-0 sudo[247846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:17:57 compute-0 sudo[247846]: pam_unix(sudo:session): session closed for user root
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80_028bb641-87db-46c7-9018-3f8d054e8e72", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "snap_name": "337552e6-dd85-4f6d-9610-99737469dd80", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: pgmap v970: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 121 KiB/s wr, 13 op/s
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:58 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:17:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Feb 01 15:17:59 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:17:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff89896c-730f-4d0f-b5d3-5b63ed6c492d' of type subvolume
Feb 01 15:17:59 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:17:59.953+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff89896c-730f-4d0f-b5d3-5b63ed6c492d' of type subvolume
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "force": true, "format": "json"}]: dispatch
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff89896c-730f-4d0f-b5d3-5b63ed6c492d'' moved to trashcan
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:17:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff89896c-730f-4d0f-b5d3-5b63ed6c492d, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:92466679-2a01-470b-96b5-c6d88c0b6509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:92466679-2a01-470b-96b5-c6d88c0b6509, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92466679-2a01-470b-96b5-c6d88c0b6509' of type subvolume
Feb 01 15:18:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:00.095+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '92466679-2a01-470b-96b5-c6d88c0b6509' of type subvolume
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/92466679-2a01-470b-96b5-c6d88c0b6509'' moved to trashcan
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:92466679-2a01-470b-96b5-c6d88c0b6509, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 01 15:18:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 01 15:18:00 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 01 15:18:00 compute-0 ceph-mon[75179]: pgmap v971: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Feb 01 15:18:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 113 KiB/s wr, 12 op/s
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57b6c133-b657-4e29-ab3e-f40863c80360, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57b6c133-b657-4e29-ab3e-f40863c80360'' moved to trashcan
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57b6c133-b657-4e29-ab3e-f40863c80360, vol_name:cephfs) < ""
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff89896c-730f-4d0f-b5d3-5b63ed6c492d", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92466679-2a01-470b-96b5-c6d88c0b6509", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: osdmap e151: 3 total, 3 up, 3 in
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:01 compute-0 ceph-mon[75179]: pgmap v973: 305 pgs: 305 active+clean; 58 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 113 KiB/s wr, 12 op/s
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57b6c133-b657-4e29-ab3e-f40863c80360", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:02 compute-0 podman[247871]: 2026-02-01 15:18:02.002976302 +0000 UTC m=+0.089531328 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 01 15:18:02 compute-0 podman[247872]: 2026-02-01 15:18:02.024245166 +0000 UTC m=+0.114713511 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:18:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/4a423dca-0a02-4c3b-a2ec-997402614fd5'.
Feb 01 15:18:03 compute-0 ceph-mon[75179]: pgmap v974: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta.tmp'
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta.tmp' to config b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f/.meta'
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:18:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:04 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp'
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta.tmp' to config b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb/.meta'
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c61fb956-cb54-4a69-b984-796f123291a0, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0_83c92ce7-3e64-4538-8f22-ddff58a7c70b", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "snap_name": "c61fb956-cb54-4a69-b984-796f123291a0", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:06 compute-0 ceph-mon[75179]: pgmap v975: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 125 KiB/s wr, 12 op/s
Feb 01 15:18:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 01 15:18:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 01 15:18:06 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 01 15:18:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb 01 15:18:06 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 01 15:18:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 01 15:18:07 compute-0 ceph-mon[75179]: osdmap e152: 3 total, 3 up, 3 in
Feb 01 15:18:07 compute-0 ceph-mon[75179]: pgmap v977: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb 01 15:18:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 01 15:18:07 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:07 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.813 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:18:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:18:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:07.855+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f71d70ca-3bed-407e-bd13-18c8cbf0995f' of type subvolume
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f71d70ca-3bed-407e-bd13-18c8cbf0995f' of type subvolume
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f71d70ca-3bed-407e-bd13-18c8cbf0995f'' moved to trashcan
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f71d70ca-3bed-407e-bd13-18c8cbf0995f, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:07.905+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaae1ab0-0f33-4607-9838-62c2bdc360fb' of type subvolume
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eaae1ab0-0f33-4607-9838-62c2bdc360fb' of type subvolume
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eaae1ab0-0f33-4607-9838-62c2bdc360fb'' moved to trashcan
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eaae1ab0-0f33-4607-9838-62c2bdc360fb, vol_name:cephfs) < ""
Feb 01 15:18:08 compute-0 ceph-mon[75179]: osdmap e153: 3 total, 3 up, 3 in
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "format": "json"}]: dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f71d70ca-3bed-407e-bd13-18c8cbf0995f", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "format": "json"}]: dispatch
Feb 01 15:18:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eaae1ab0-0f33-4607-9838-62c2bdc360fb", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb 01 15:18:09 compute-0 ceph-mon[75179]: pgmap v979: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 158 KiB/s wr, 17 op/s
Feb 01 15:18:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 56 KiB/s wr, 7 op/s
Feb 01 15:18:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:18:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:11 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mon[75179]: pgmap v980: 305 pgs: 305 active+clean; 59 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 56 KiB/s wr, 7 op/s
Feb 01 15:18:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:11 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp'
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta.tmp' to config b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb/.meta'
Feb 01 15:18:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1e96b528-01bb-4d75-b3fa-211a85006c95, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 114 KiB/s wr, 14 op/s
Feb 01 15:18:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95_b6d4e46b-8d52-41c3-ae82-52a9e57131ed", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "snap_name": "1e96b528-01bb-4d75-b3fa-211a85006c95", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:13 compute-0 ceph-mon[75179]: pgmap v981: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 114 KiB/s wr, 14 op/s
Feb 01 15:18:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 746 B/s rd, 57 KiB/s wr, 6 op/s
Feb 01 15:18:14 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:14 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:14 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:14 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:14 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:15 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:15.336+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb' of type subvolume
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb' of type subvolume
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb'' moved to trashcan
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb, vol_name:cephfs) < ""
Feb 01 15:18:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb 01 15:18:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb 01 15:18:15 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb 01 15:18:15 compute-0 ceph-mon[75179]: pgmap v982: 305 pgs: 305 active+clean; 59 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 746 B/s rd, 57 KiB/s wr, 6 op/s
Feb 01 15:18:15 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb 01 15:18:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb 01 15:18:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb 01 15:18:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb 01 15:18:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "format": "json"}]: dispatch
Feb 01 15:18:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "85ddb4b4-f1ca-4471-8fc6-5c185f91fcdb", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:16 compute-0 ceph-mon[75179]: osdmap e154: 3 total, 3 up, 3 in
Feb 01 15:18:16 compute-0 ceph-mon[75179]: osdmap e155: 3 total, 3 up, 3 in
Feb 01 15:18:17 compute-0 ceph-mon[75179]: pgmap v985: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb 01 15:18:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:18:17
Feb 01 15:18:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:18:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:18:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'backups', 'images', 'volumes']
Feb 01 15:18:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:18:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:18 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:19 compute-0 ceph-mon[75179]: pgmap v986: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 136 KiB/s wr, 15 op/s
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/7d6b8a93-2239-49a1-a970-ce3d1b5be304'.
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:20 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 7 op/s
Feb 01 15:18:20 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb 01 15:18:21 compute-0 ceph-mon[75179]: pgmap v987: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 7 op/s
Feb 01 15:18:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:18:21 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:21 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 123 KiB/s wr, 13 op/s
Feb 01 15:18:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:23 compute-0 ceph-mon[75179]: pgmap v988: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 123 KiB/s wr, 13 op/s
Feb 01 15:18:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "format": "json"}]: dispatch
Feb 01 15:18:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "target_sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, target_sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/461aa132-07e7-4d84-b5b6-931252a109cb'.
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 147d3459-ad16-48b8-8783-219c36fdf6db for path b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, target_sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e)
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "format": "json"}]: dispatch
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e) -- by 0 seconds
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb 01 15:18:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb 01 15:18:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:25.550+0000 7f824346c640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.snap/383b4f57-c12d-4143-bc64-f94b56aa4406/7d6b8a93-2239-49a1-a970-ce3d1b5be304' to b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/461aa132-07e7-4d84-b5b6-931252a109cb'
Feb 01 15:18:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:18:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:18:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.clone_index] untracking 147d3459-ad16-48b8-8783-219c36fdf6db
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta.tmp' to config b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e/.meta'
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, e91ca10f-a5ab-4efe-a6b7-448ed904538e)
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:25 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:25 compute-0 ceph-mon[75179]: pgmap v989: 305 pgs: 305 active+clean; 60 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:18:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "target_sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:18:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:18:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb 01 15:18:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb 01 15:18:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/b7cb2b49-e944-42cc-9aea-91bc5616fa3a'.
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:26 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Feb 01 15:18:26 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f82797d15e0>
Feb 01 15:18:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:26 compute-0 ceph-mon[75179]: osdmap e156: 3 total, 3 up, 3 in
Feb 01 15:18:26 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:27 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.viosrg(active, since 28m)
Feb 01 15:18:27 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:27 compute-0 ceph-mon[75179]: pgmap v991: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:27 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb 01 15:18:27 compute-0 ceph-mon[75179]: mgrmap e16: compute-0.viosrg(active, since 28m)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659702529057695 of space, bias 1.0, pg target 0.19979107587173084 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00032473327732104813 of space, bias 4.0, pg target 0.38967993278525775 quantized to 16 (current 16)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [progress INFO root] Writing back 19 completed events
Feb 01 15:18:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 01 15:18:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:18:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:28 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:29 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:29 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:29 compute-0 ceph-mon[75179]: pgmap v992: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:18:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:29 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "format": "json"}]: dispatch
Feb 01 15:18:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:29 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "format": "json"}]: dispatch
Feb 01 15:18:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:31 compute-0 ceph-mon[75179]: pgmap v993: 305 pgs: 305 active+clean; 61 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 122 KiB/s wr, 12 op/s
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:18:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:18:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:18:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:32 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:18:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:18:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:18:32 compute-0 podman[247931]: 2026-02-01 15:18:32.975165014 +0000 UTC m=+0.062106293 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 01 15:18:32 compute-0 podman[247932]: 2026-02-01 15:18:32.994932206 +0000 UTC m=+0.085655000 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 01 15:18:33 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:33.620 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:18:33 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:33.623 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:33 compute-0 ceph-mon[75179]: pgmap v994: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb 01 15:18:33 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:33 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:18:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp'
Feb 01 15:18:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta.tmp' to config b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688/.meta'
Feb 01 15:18:34 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b86e68d4-3845-4b37-bc61-babe728af73e, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb 01 15:18:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e_7fcd119c-0e43-4007-8ec8-3d4fbb59c309", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "snap_name": "b86e68d4-3845-4b37-bc61-babe728af73e", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:36 compute-0 ceph-mon[75179]: pgmap v995: 305 pgs: 305 active+clean; 61 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 131 KiB/s wr, 14 op/s
Feb 01 15:18:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:36 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 700 B/s rd, 82 KiB/s wr, 9 op/s
Feb 01 15:18:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:36 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc1a1612-4970-46ec-aefe-db2d1c0f8688' of type subvolume
Feb 01 15:18:37 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:37.323+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc1a1612-4970-46ec-aefe-db2d1c0f8688' of type subvolume
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc1a1612-4970-46ec-aefe-db2d1c0f8688'' moved to trashcan
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:37 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc1a1612-4970-46ec-aefe-db2d1c0f8688, vol_name:cephfs) < ""
Feb 01 15:18:37 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:37 compute-0 ceph-mon[75179]: pgmap v996: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 700 B/s rd, 82 KiB/s wr, 9 op/s
Feb 01 15:18:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:37 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 8 op/s
Feb 01 15:18:38 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:18:38.626 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:18:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "format": "json"}]: dispatch
Feb 01 15:18:38 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc1a1612-4970-46ec-aefe-db2d1c0f8688", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/839cb248-0acc-449c-9f35-9972fc8e8c70'.
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta.tmp'
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta.tmp' to config b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5/.meta'
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:18:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:40 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:40 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb 01 15:18:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb 01 15:18:40 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb 01 15:18:40 compute-0 ceph-mon[75179]: pgmap v997: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 70 KiB/s wr, 8 op/s
Feb 01 15:18:40 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:40 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 83 KiB/s wr, 9 op/s
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/e581cce0-6e5d-4f0c-9f72-b6f802b6db39'.
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:40 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: osdmap e157: 3 total, 3 up, 3 in
Feb 01 15:18:41 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:42 compute-0 ceph-mon[75179]: pgmap v999: 305 pgs: 305 active+clean; 62 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 83 KiB/s wr, 9 op/s
Feb 01 15:18:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:42 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb 01 15:18:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb 01 15:18:43 compute-0 ceph-mon[75179]: pgmap v1000: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:43 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:43 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:43.833+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c38a331c-6d1f-4342-961a-602e5b4f62e5' of type subvolume
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c38a331c-6d1f-4342-961a-602e5b4f62e5' of type subvolume
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c38a331c-6d1f-4342-961a-602e5b4f62e5'' moved to trashcan
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c38a331c-6d1f-4342-961a-602e5b4f62e5, vol_name:cephfs) < ""
Feb 01 15:18:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "format": "json"}]: dispatch
Feb 01 15:18:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "format": "json"}]: dispatch
Feb 01 15:18:44 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c38a331c-6d1f-4342-961a-602e5b4f62e5", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb 01 15:18:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "format": "json"}]: dispatch
Feb 01 15:18:45 compute-0 ceph-mon[75179]: pgmap v1001: 305 pgs: 305 active+clean; 62 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 114 KiB/s wr, 10 op/s
Feb 01 15:18:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/68e0a21d-a250-4452-852e-a3bef2850322'.
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta.tmp'
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta.tmp' to config b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9/.meta'
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:18:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:18:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:47 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:47 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:47 compute-0 ceph-mon[75179]: pgmap v1002: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:18:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:18:47 compute-0 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:47 compute-0 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:18:47 compute-0 nova_compute[238794]: 2026-02-01 15:18:47.608 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:18:47 compute-0 nova_compute[238794]: 2026-02-01 15:18:47.622 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:18:47 compute-0 nova_compute[238794]: 2026-02-01 15:18:47.622 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:48 compute-0 nova_compute[238794]: 2026-02-01 15:18:48.329 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp'
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta.tmp' to config b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565/.meta'
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c21d430b-3b4d-4d2f-8c15-58fdd24843b4, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:18:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:18:49 compute-0 nova_compute[238794]: 2026-02-01 15:18:49.335 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4_c0b5a8ad-609c-4622-bb29-29375e2fdb31", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "snap_name": "c21d430b-3b4d-4d2f-8c15-58fdd24843b4", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:49 compute-0 ceph-mon[75179]: pgmap v1003: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 135 KiB/s wr, 12 op/s
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 11 op/s
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:50 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:18:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:18:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:50 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:18:50 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:50.905+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9' of type subvolume
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9' of type subvolume
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9'' moved to trashcan
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:50 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9, vol_name:cephfs) < ""
Feb 01 15:18:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:18:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:18:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:18:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb 01 15:18:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb 01 15:18:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb 01 15:18:51 compute-0 nova_compute[238794]: 2026-02-01 15:18:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:51 compute-0 nova_compute[238794]: 2026-02-01 15:18:51.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:18:51 compute-0 ceph-mon[75179]: pgmap v1004: 305 pgs: 305 active+clean; 63 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 11 op/s
Feb 01 15:18:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "format": "json"}]: dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7b904c4-a5f8-4c39-a831-93f4c8a9a0a9", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/4008296022' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:18:51 compute-0 ceph-mon[75179]: osdmap e158: 3 total, 3 up, 3 in
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:18:51.706+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565' of type subvolume
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565' of type subvolume
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565'' moved to trashcan
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:18:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565, vol_name:cephfs) < ""
Feb 01 15:18:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb 01 15:18:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb 01 15:18:52 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb 01 15:18:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 177 KiB/s wr, 15 op/s
Feb 01 15:18:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "format": "json"}]: dispatch
Feb 01 15:18:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b77cdfcb-ad1c-46a1-b6bd-1d432fc4f565", "force": true, "format": "json"}]: dispatch
Feb 01 15:18:52 compute-0 ceph-mon[75179]: osdmap e159: 3 total, 3 up, 3 in
Feb 01 15:18:53 compute-0 nova_compute[238794]: 2026-02-01 15:18:53.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:53 compute-0 ceph-mon[75179]: pgmap v1007: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 177 KiB/s wr, 15 op/s
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:18:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:18:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:54 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:18:54 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:18:54 compute-0 nova_compute[238794]: 2026-02-01 15:18:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 102 KiB/s wr, 9 op/s
Feb 01 15:18:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:18:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:18:54 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:18:55 compute-0 nova_compute[238794]: 2026-02-01 15:18:55.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:55 compute-0 nova_compute[238794]: 2026-02-01 15:18:55.321 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:56 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:18:56 compute-0 ceph-mon[75179]: pgmap v1008: 305 pgs: 305 active+clean; 64 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 102 KiB/s wr, 9 op/s
Feb 01 15:18:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.350 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.351 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.352 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:18:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb 01 15:18:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:18:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155576865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:18:56 compute-0 nova_compute[238794]: 2026-02-01 15:18:56.910 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.140 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.142 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.143 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.143 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:18:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2155576865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:18:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.228 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.228 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.242 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:18:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:18:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031007255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.842 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.848 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.861 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.863 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:18:57 compute-0 sudo[248021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:18:57 compute-0 nova_compute[238794]: 2026-02-01 15:18:57.863 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:18:57 compute-0 sudo[248021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:18:57 compute-0 sudo[248021]: pam_unix(sudo:session): session closed for user root
Feb 01 15:18:57 compute-0 sudo[248048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:18:57 compute-0 sudo[248048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: pgmap v1009: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb 01 15:18:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2031007255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:18:58 compute-0 sudo[248048]: pam_unix(sudo:session): session closed for user root
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:18:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:18:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:18:58 compute-0 sudo[248104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:18:58 compute-0 sudo[248104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:18:58 compute-0 sudo[248104]: pam_unix(sudo:session): session closed for user root
Feb 01 15:18:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb 01 15:18:58 compute-0 sudo[248129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:18:58 compute-0 sudo[248129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.769412431 +0000 UTC m=+0.050576631 container create 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:18:58 compute-0 systemd[1]: Started libpod-conmon-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope.
Feb 01 15:18:58 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.74712121 +0000 UTC m=+0.028285450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.846697977 +0000 UTC m=+0.127862177 container init 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.852798547 +0000 UTC m=+0.133962777 container start 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.85686162 +0000 UTC m=+0.138025820 container attach 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:18:58 compute-0 keen_chaplygin[248182]: 167 167
Feb 01 15:18:58 compute-0 systemd[1]: libpod-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope: Deactivated successfully.
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.858614239 +0000 UTC m=+0.139778439 container died 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-71521779ea9230345375d974fb643ce612c41a936eb794d8137170e9b00ab5a3-merged.mount: Deactivated successfully.
Feb 01 15:18:58 compute-0 podman[248166]: 2026-02-01 15:18:58.900259191 +0000 UTC m=+0.181423401 container remove 164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 01 15:18:58 compute-0 systemd[1]: libpod-conmon-164ce79561eab2a98d6232d19cec3acef98d74ec50eee2ac93cb92a1e6dfb4e5.scope: Deactivated successfully.
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.054078191 +0000 UTC m=+0.037442426 container create 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 01 15:18:59 compute-0 systemd[1]: Started libpod-conmon-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope.
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.037278012 +0000 UTC m=+0.020642237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:18:59 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.168125321 +0000 UTC m=+0.151489626 container init 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.180778254 +0000 UTC m=+0.164142459 container start 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:18:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.184714794 +0000 UTC m=+0.168079029 container attach 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 15:18:59 compute-0 admiring_cerf[248222]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:18:59 compute-0 admiring_cerf[248222]: --> All data devices are unavailable
Feb 01 15:18:59 compute-0 systemd[1]: libpod-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope: Deactivated successfully.
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.65929086 +0000 UTC m=+0.642655075 container died 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5e813b933d9298805aebee90a9663e26635eff55c96b16c22e196ffa04577d6-merged.mount: Deactivated successfully.
Feb 01 15:18:59 compute-0 podman[248206]: 2026-02-01 15:18:59.706513457 +0000 UTC m=+0.689877672 container remove 410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:18:59 compute-0 systemd[1]: libpod-conmon-410fd3ab197ecbacae190bb7d157e5ce1c38ca93c51a2cf1c4fcd725aae06dd1.scope: Deactivated successfully.
Feb 01 15:18:59 compute-0 sudo[248129]: pam_unix(sudo:session): session closed for user root
Feb 01 15:18:59 compute-0 sudo[248255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:18:59 compute-0 sudo[248255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:18:59 compute-0 sudo[248255]: pam_unix(sudo:session): session closed for user root
Feb 01 15:18:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:18:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:18:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:18:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:18:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:18:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:18:59 compute-0 sudo[248280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:18:59 compute-0 sudo[248280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:19:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:00 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.15012411 +0000 UTC m=+0.051938500 container create 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:19:00 compute-0 ceph-mon[75179]: pgmap v1010: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 164 KiB/s wr, 15 op/s
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:00 compute-0 systemd[1]: Started libpod-conmon-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope.
Feb 01 15:19:00 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.126397218 +0000 UTC m=+0.028211658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.229333349 +0000 UTC m=+0.131147749 container init 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.235532962 +0000 UTC m=+0.137347372 container start 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.239400549 +0000 UTC m=+0.141214929 container attach 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:19:00 compute-0 jovial_faraday[248334]: 167 167
Feb 01 15:19:00 compute-0 systemd[1]: libpod-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope: Deactivated successfully.
Feb 01 15:19:00 compute-0 conmon[248334]: conmon 26481b27da1673ef4fa2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope/container/memory.events
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.242916708 +0000 UTC m=+0.144731098 container died 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-da4017325c4110425e46470ba3a32c2132b196333c7896affb487cfab659f4be-merged.mount: Deactivated successfully.
Feb 01 15:19:00 compute-0 podman[248318]: 2026-02-01 15:19:00.280499376 +0000 UTC m=+0.182313786 container remove 26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 01 15:19:00 compute-0 systemd[1]: libpod-conmon-26481b27da1673ef4fa2d1b4847408e221fc2b5f1c963c1784fcdd1d1bc44a59.scope: Deactivated successfully.
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.454937481 +0000 UTC m=+0.049731068 container create 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:19:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 335 B/s rd, 54 KiB/s wr, 5 op/s
Feb 01 15:19:00 compute-0 systemd[1]: Started libpod-conmon-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope.
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.428893984 +0000 UTC m=+0.023687621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:19:00 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.576587664 +0000 UTC m=+0.171381301 container init 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.583080225 +0000 UTC m=+0.177873812 container start 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.586730937 +0000 UTC m=+0.181524534 container attach 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:19:00 compute-0 admiring_yalow[248374]: {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     "0": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "devices": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "/dev/loop3"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             ],
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_name": "ceph_lv0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_size": "21470642176",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "name": "ceph_lv0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "tags": {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_name": "ceph",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.crush_device_class": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.encrypted": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.objectstore": "bluestore",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_id": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.vdo": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.with_tpm": "0"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             },
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "vg_name": "ceph_vg0"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         }
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     ],
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     "1": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "devices": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "/dev/loop4"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             ],
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_name": "ceph_lv1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_size": "21470642176",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "name": "ceph_lv1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "tags": {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_name": "ceph",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.crush_device_class": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.encrypted": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.objectstore": "bluestore",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_id": "1",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.vdo": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.with_tpm": "0"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             },
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "vg_name": "ceph_vg1"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         }
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     ],
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     "2": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "devices": [
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "/dev/loop5"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             ],
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_name": "ceph_lv2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_size": "21470642176",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "name": "ceph_lv2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "tags": {
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.cluster_name": "ceph",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.crush_device_class": "",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.encrypted": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.objectstore": "bluestore",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osd_id": "2",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.vdo": "0",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:                 "ceph.with_tpm": "0"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             },
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "type": "block",
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:             "vg_name": "ceph_vg2"
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:         }
Feb 01 15:19:00 compute-0 admiring_yalow[248374]:     ]
Feb 01 15:19:00 compute-0 admiring_yalow[248374]: }
Feb 01 15:19:00 compute-0 systemd[1]: libpod-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope: Deactivated successfully.
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.916277068 +0000 UTC m=+0.511070645 container died 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 01 15:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5583fd4196955b34f4e7bb25402400d199040aa8d582c9f74de718a126c91fc2-merged.mount: Deactivated successfully.
Feb 01 15:19:00 compute-0 podman[248357]: 2026-02-01 15:19:00.963413632 +0000 UTC m=+0.558207189 container remove 1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_yalow, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:19:00 compute-0 systemd[1]: libpod-conmon-1800f9a38e51e8d950f0c4ca6f4670f501c29a19333600ced24e2b355a54c6b7.scope: Deactivated successfully.
Feb 01 15:19:01 compute-0 sudo[248280]: pam_unix(sudo:session): session closed for user root
Feb 01 15:19:01 compute-0 sudo[248395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:19:01 compute-0 sudo[248395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:19:01 compute-0 sudo[248395]: pam_unix(sudo:session): session closed for user root
Feb 01 15:19:01 compute-0 sudo[248420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:19:01 compute-0 sudo[248420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:19:01 compute-0 ceph-mon[75179]: pgmap v1011: 305 pgs: 305 active+clean; 64 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 335 B/s rd, 54 KiB/s wr, 5 op/s
Feb 01 15:19:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb 01 15:19:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb 01 15:19:01 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:19:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:01 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:01 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.490060101 +0000 UTC m=+0.065202160 container create 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 15:19:01 compute-0 systemd[1]: Started libpod-conmon-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope.
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.463788658 +0000 UTC m=+0.038930777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:19:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.58326016 +0000 UTC m=+0.158402219 container init 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.59258843 +0000 UTC m=+0.167730459 container start 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.595724218 +0000 UTC m=+0.170866247 container attach 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 01 15:19:01 compute-0 tender_moore[248474]: 167 167
Feb 01 15:19:01 compute-0 systemd[1]: libpod-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope: Deactivated successfully.
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.597996851 +0000 UTC m=+0.173138910 container died 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:19:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b1a5771207a9792079b9fbbdda377153bc1cac57725869e46bdaaa1f5fef6d4-merged.mount: Deactivated successfully.
Feb 01 15:19:01 compute-0 podman[248456]: 2026-02-01 15:19:01.650471683 +0000 UTC m=+0.225613742 container remove 37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_moore, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:19:01 compute-0 systemd[1]: libpod-conmon-37f0ca5def6e92a2f7ca2bfc95d19e0a410de9af1574ce25c557ec99b39d3a8a.scope: Deactivated successfully.
Feb 01 15:19:01 compute-0 podman[248498]: 2026-02-01 15:19:01.839923187 +0000 UTC m=+0.056820075 container create 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:19:01 compute-0 systemd[1]: Started libpod-conmon-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope.
Feb 01 15:19:01 compute-0 podman[248498]: 2026-02-01 15:19:01.817874032 +0000 UTC m=+0.034770920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:19:01 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:19:01 compute-0 podman[248498]: 2026-02-01 15:19:01.948993389 +0000 UTC m=+0.165890277 container init 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 01 15:19:01 compute-0 podman[248498]: 2026-02-01 15:19:01.963509234 +0000 UTC m=+0.180406122 container start 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 01 15:19:01 compute-0 podman[248498]: 2026-02-01 15:19:01.970515449 +0000 UTC m=+0.187412337 container attach 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 01 15:19:02 compute-0 ceph-mon[75179]: osdmap e160: 3 total, 3 up, 3 in
Feb 01 15:19:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:02 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:02 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb 01 15:19:02 compute-0 lvm[248594]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:19:02 compute-0 lvm[248594]: VG ceph_vg1 finished
Feb 01 15:19:02 compute-0 lvm[248593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:19:02 compute-0 lvm[248593]: VG ceph_vg0 finished
Feb 01 15:19:02 compute-0 lvm[248596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:19:02 compute-0 lvm[248596]: VG ceph_vg2 finished
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:02 compute-0 competent_sinoussi[248515]: {}
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/ddd00d22-d077-475e-a668-ba7be553860a'.
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta.tmp'
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta.tmp' to config b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff/.meta'
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:02 compute-0 systemd[1]: libpod-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Deactivated successfully.
Feb 01 15:19:02 compute-0 systemd[1]: libpod-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Consumed 1.362s CPU time.
Feb 01 15:19:02 compute-0 podman[248498]: 2026-02-01 15:19:02.837279494 +0000 UTC m=+1.054176382 container died 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3072bebe0bc5f0eb061decf0121009c0a1677cc28ce5441c479ac64bb7454ba4-merged.mount: Deactivated successfully.
Feb 01 15:19:02 compute-0 podman[248498]: 2026-02-01 15:19:02.8744216 +0000 UTC m=+1.091318448 container remove 190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 01 15:19:02 compute-0 systemd[1]: libpod-conmon-190462a71cdfec16ae1410b226f73a08fb86e232107612e6808e59a55c3a8493.scope: Deactivated successfully.
Feb 01 15:19:02 compute-0 sudo[248420]: pam_unix(sudo:session): session closed for user root
Feb 01 15:19:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:19:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:19:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:19:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:19:03 compute-0 sudo[248612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:19:03 compute-0 sudo[248612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:19:03 compute-0 sudo[248612]: pam_unix(sudo:session): session closed for user root
Feb 01 15:19:03 compute-0 podman[248636]: 2026-02-01 15:19:03.13580939 +0000 UTC m=+0.117625532 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:19:03 compute-0 podman[248637]: 2026-02-01 15:19:03.156178088 +0000 UTC m=+0.137541837 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:19:03 compute-0 ceph-mon[75179]: pgmap v1013: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb 01 15:19:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb 01 15:19:03 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:19:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/ca3ae955-cb00-4008-bc9e-6ebd7fc60edf'.
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta.tmp'
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta.tmp' to config b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905/.meta'
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb 01 15:19:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb 01 15:19:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:19:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:04 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:04 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:05 compute-0 ceph-mon[75179]: pgmap v1014: 305 pgs: 305 active+clean; 65 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 93 KiB/s wr, 8 op/s
Feb 01 15:19:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:05 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:06 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:19:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:06 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:07 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 01 15:19:07 compute-0 ceph-mon[75179]: pgmap v1015: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/0af9b831-6215-4111-ba2a-47cc2086c878'.
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta.tmp'
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta.tmp' to config b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80/.meta'
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:07 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:07 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.814 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:19:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:19:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:19:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:19:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:08 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:19:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:19:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:09 compute-0 ceph-mon[75179]: pgmap v1016: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:09 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:09.872+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd707ecfe-f6ee-49fe-a02c-3c565e379dff' of type subvolume
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd707ecfe-f6ee-49fe-a02c-3c565e379dff' of type subvolume
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d707ecfe-f6ee-49fe-a02c-3c565e379dff'' moved to trashcan
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d707ecfe-f6ee-49fe-a02c-3c565e379dff, vol_name:cephfs) < ""
Feb 01 15:19:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "format": "json"}]: dispatch
Feb 01 15:19:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d707ecfe-f6ee-49fe-a02c-3c565e379dff", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:11 compute-0 ceph-mon[75179]: pgmap v1017: 305 pgs: 305 active+clean; 65 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 111 KiB/s wr, 9 op/s
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:19:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:11 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:11 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/24b4e50d-218b-41bb-b9dd-f25fddccd8d7'.
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta.tmp'
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta.tmp' to config b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5/.meta'
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:11 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 91 B/s rd, 118 KiB/s wr, 8 op/s
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb 01 15:19:12 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:13 compute-0 ceph-mon[75179]: pgmap v1018: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 91 B/s rd, 118 KiB/s wr, 8 op/s
Feb 01 15:19:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 110 KiB/s wr, 8 op/s
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Feb 01 15:19:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Feb 01 15:19:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:19:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:15 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mon[75179]: pgmap v1019: 305 pgs: 305 active+clean; 66 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 110 KiB/s wr, 8 op/s
Feb 01 15:19:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 01 15:19:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 01 15:19:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/3168f927-e301-452a-884c-a434cfe97158'.
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta.tmp'
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta.tmp' to config b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63/.meta'
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 160 KiB/s wr, 13 op/s
Feb 01 15:19:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice", "format": "json"}]: dispatch
Feb 01 15:19:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:16 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb 01 15:19:16 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:17 compute-0 ceph-mon[75179]: pgmap v1020: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 160 KiB/s wr, 13 op/s
Feb 01 15:19:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:19:17
Feb 01 15:19:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:19:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:19:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root']
Feb 01 15:19:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/10b4830f-ffdf-472e-bb12-472493dd5549'.
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta.tmp'
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta.tmp' to config b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5/.meta'
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:18 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:19:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:18 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:18 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb 01 15:19:18 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:18 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:18 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:18 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:19:19 compute-0 ceph-mon[75179]: pgmap v1021: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb 01 15:19:19 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:20 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:20.619+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bb072d4-78e4-494f-ab70-eb9c366fac63' of type subvolume
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bb072d4-78e4-494f-ab70-eb9c366fac63' of type subvolume
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7bb072d4-78e4-494f-ab70-eb9c366fac63'' moved to trashcan
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bb072d4-78e4-494f-ab70-eb9c366fac63, vol_name:cephfs) < ""
Feb 01 15:19:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:21 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Feb 01 15:19:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:21 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:21 compute-0 ceph-mon[75179]: pgmap v1022: 305 pgs: 305 active+clean; 67 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 104 KiB/s wr, 8 op/s
Feb 01 15:19:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "format": "json"}]: dispatch
Feb 01 15:19:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bb072d4-78e4-494f-ab70-eb9c366fac63", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:19:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:19:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:22 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 171 KiB/s wr, 13 op/s
Feb 01 15:19:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:19:22 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:19:23 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:23 compute-0 ceph-mon[75179]: pgmap v1023: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 171 KiB/s wr, 13 op/s
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd57288c1-6475-4afc-b89b-63e0397aa3d5' of type subvolume
Feb 01 15:19:24 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:24.197+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd57288c1-6475-4afc-b89b-63e0397aa3d5' of type subvolume
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d57288c1-6475-4afc-b89b-63e0397aa3d5'' moved to trashcan
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d57288c1-6475-4afc-b89b-63e0397aa3d5, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 117 KiB/s wr, 9 op/s
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:24.938+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'db1d1fea-0e00-4e6b-b733-ef0fe090c2f5' of type subvolume
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'db1d1fea-0e00-4e6b-b733-ef0fe090c2f5' of type subvolume
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/db1d1fea-0e00-4e6b-b733-ef0fe090c2f5'' moved to trashcan
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:db1d1fea-0e00-4e6b-b733-ef0fe090c2f5, vol_name:cephfs) < ""
Feb 01 15:19:25 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:19:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:25 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice_bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:25 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "format": "json"}]: dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d57288c1-6475-4afc-b89b-63e0397aa3d5", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: pgmap v1024: 305 pgs: 305 active+clean; 67 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 117 KiB/s wr, 9 op/s
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "format": "json"}]: dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "db1d1fea-0e00-4e6b-b733-ef0fe090c2f5", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 175 KiB/s wr, 15 op/s
Feb 01 15:19:26 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b1b736a-26a1-4658-8b8f-779a2b222e80' of type subvolume
Feb 01 15:19:27 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:27.670+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b1b736a-26a1-4658-8b8f-779a2b222e80' of type subvolume
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7b1b736a-26a1-4658-8b8f-779a2b222e80'' moved to trashcan
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b1b736a-26a1-4658-8b8f-779a2b222e80, vol_name:cephfs) < ""
Feb 01 15:19:27 compute-0 ceph-mon[75179]: pgmap v1025: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 175 KiB/s wr, 15 op/s
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659708118084749 of space, bias 1.0, pg target 0.1997912435425425 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004374240069927442 of space, bias 4.0, pg target 0.5249088083912931 quantized to 16 (current 16)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053787e-07 of space, bias 1.0, pg target 0.0001907721234616136 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/a80aa19e-424e-4e1a-a7e9-653f5a86eda0'.
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta.tmp'
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta.tmp' to config b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a/.meta'
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:28 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb 01 15:19:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b1b736a-26a1-4658-8b8f-779a2b222e80", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Feb 01 15:19:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:28 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Feb 01 15:19:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:19:28 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:28 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:28 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb 01 15:19:29 compute-0 ceph-mon[75179]: pgmap v1026: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 01 15:19:29 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 01 15:19:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:31 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:31.167+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f371bcf-0672-4b5f-9567-1fcaf6940905' of type subvolume
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5f371bcf-0672-4b5f-9567-1fcaf6940905' of type subvolume
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5f371bcf-0672-4b5f-9567-1fcaf6940905'' moved to trashcan
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:31 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5f371bcf-0672-4b5f-9567-1fcaf6940905, vol_name:cephfs) < ""
Feb 01 15:19:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:31 compute-0 ceph-mon[75179]: pgmap v1027: 305 pgs: 305 active+clean; 68 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 125 KiB/s wr, 10 op/s
Feb 01 15:19:31 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "format": "json"}]: dispatch
Feb 01 15:19:31 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f371bcf-0672-4b5f-9567-1fcaf6940905", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:32 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:32 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:32 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:32 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:32 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 177 KiB/s wr, 15 op/s
Feb 01 15:19:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:32 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:69f497f0-f1d5-405b-b865-e545c0627b3a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:69f497f0-f1d5-405b-b865-e545c0627b3a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:33 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:33.324+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69f497f0-f1d5-405b-b865-e545c0627b3a' of type subvolume
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69f497f0-f1d5-405b-b865-e545c0627b3a' of type subvolume
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/69f497f0-f1d5-405b-b865-e545c0627b3a'' moved to trashcan
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:33 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:69f497f0-f1d5-405b-b865-e545c0627b3a, vol_name:cephfs) < ""
Feb 01 15:19:33 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:33 compute-0 ceph-mon[75179]: pgmap v1028: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 177 KiB/s wr, 15 op/s
Feb 01 15:19:33 compute-0 podman[248687]: 2026-02-01 15:19:33.969991419 +0000 UTC m=+0.055113488 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 01 15:19:34 compute-0 podman[248688]: 2026-02-01 15:19:34.000101139 +0000 UTC m=+0.083812528 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 01 15:19:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 110 KiB/s wr, 10 op/s
Feb 01 15:19:34 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "format": "json"}]: dispatch
Feb 01 15:19:34 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "69f497f0-f1d5-405b-b865-e545c0627b3a", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:35 compute-0 ceph-mon[75179]: pgmap v1029: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 110 KiB/s wr, 10 op/s
Feb 01 15:19:35 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:35.854 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:19:35 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:35.855 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:35 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:19:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:35 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:35 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:36 compute-0 sshd-session[248732]: Connection closed by 170.64.196.59 port 41304
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 152 KiB/s wr, 15 op/s
Feb 01 15:19:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:36 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:36 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:36 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:19:36.857 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e91ca10f-a5ab-4efe-a6b7-448ed904538e'' moved to trashcan
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:36 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e91ca10f-a5ab-4efe-a6b7-448ed904538e, vol_name:cephfs) < ""
Feb 01 15:19:37 compute-0 ceph-mon[75179]: pgmap v1030: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 152 KiB/s wr, 15 op/s
Feb 01 15:19:37 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "format": "json"}]: dispatch
Feb 01 15:19:37 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e91ca10f-a5ab-4efe-a6b7-448ed904538e", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.959980) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178960027, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2421, "num_deletes": 257, "total_data_size": 3036464, "memory_usage": 3079368, "flush_reason": "Manual Compaction"}
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178975366, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2986253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21479, "largest_seqno": 23899, "table_properties": {"data_size": 2975431, "index_size": 6612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26726, "raw_average_key_size": 21, "raw_value_size": 2952054, "raw_average_value_size": 2392, "num_data_blocks": 292, "num_entries": 1234, "num_filter_entries": 1234, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959040, "oldest_key_time": 1769959040, "file_creation_time": 1769959178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 15460 microseconds, and 8364 cpu microseconds.
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.975436) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2986253 bytes OK
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.975462) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.976958) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.976981) EVENT_LOG_v1 {"time_micros": 1769959178976973, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3025416, prev total WAL file size 3025416, number of live WAL files 2.
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977844) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2916KB)], [50(7280KB)]
Feb 01 15:19:38 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959178977905, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10441944, "oldest_snapshot_seqno": -1}
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5446 keys, 8638825 bytes, temperature: kUnknown
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179029928, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8638825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8600961, "index_size": 23162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 134650, "raw_average_key_size": 24, "raw_value_size": 8501734, "raw_average_value_size": 1561, "num_data_blocks": 962, "num_entries": 5446, "num_filter_entries": 5446, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.030235) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8638825 bytes
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.031980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.4 rd, 165.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.1 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5979, records dropped: 533 output_compression: NoCompression
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.032008) EVENT_LOG_v1 {"time_micros": 1769959179031994, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52114, "compaction_time_cpu_micros": 27779, "output_level": 6, "num_output_files": 1, "total_output_size": 8638825, "num_input_records": 5979, "num_output_records": 5446, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179032609, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959179034045, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:38.977734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:19:39.034202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:19:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:39 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID alice bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:39 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:39 compute-0 ceph-mon[75179]: pgmap v1031: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb 01 15:19:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:39 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp'
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta.tmp' to config b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572/.meta'
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:383b4f57-c12d-4143-bc64-f94b56aa4406, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb 01 15:19:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "r", "format": "json"}]: dispatch
Feb 01 15:19:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406_f84678b0-2860-4390-8392-13cdcac44563", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:41 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "snap_name": "383b4f57-c12d-4143-bc64-f94b56aa4406", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:41 compute-0 ceph-mon[75179]: pgmap v1032: 305 pgs: 305 active+clean; 69 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 93 KiB/s wr, 9 op/s
Feb 01 15:19:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 15 op/s
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Feb 01 15:19:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:43 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Feb 01 15:19:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:43 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:19:43 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:43.870+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1bb4ab8-c449-4ad1-83d0-cba448059572' of type subvolume
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1bb4ab8-c449-4ad1-83d0-cba448059572' of type subvolume
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e1bb4ab8-c449-4ad1-83d0-cba448059572'' moved to trashcan
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1bb4ab8-c449-4ad1-83d0-cba448059572, vol_name:cephfs) < ""
Feb 01 15:19:43 compute-0 ceph-mon[75179]: pgmap v1033: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 15 op/s
Feb 01 15:19:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 01 15:19:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 01 15:19:43 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 01 15:19:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 105 KiB/s wr, 10 op/s
Feb 01 15:19:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb 01 15:19:44 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb 01 15:19:45 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb 01 15:19:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 01 15:19:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "format": "json"}]: dispatch
Feb 01 15:19:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1bb4ab8-c449-4ad1-83d0-cba448059572", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:46 compute-0 ceph-mon[75179]: pgmap v1034: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 105 KiB/s wr, 10 op/s
Feb 01 15:19:46 compute-0 ceph-mon[75179]: osdmap e161: 3 total, 3 up, 3 in
Feb 01 15:19:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb 01 15:19:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:46 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: Creating meta for ID bob with tenant 7043f01f29d441a4801c5afbb65b54e3
Feb 01 15:19:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} v 0)
Feb 01 15:19:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:46 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:46 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"} : dispatch
Feb 01 15:19:47 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "mon", "allow r"], "format": "json"}]': finished
Feb 01 15:19:48 compute-0 ceph-mon[75179]: pgmap v1036: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:48 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825bec9940>)]
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5d27c0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be5cdf0>)]
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:19:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:19:49 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.viosrg(active, since 29m)
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/a22da935-3d14-467d-800a-8fe6059d4763'.
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta.tmp'
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta.tmp' to config b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6/.meta'
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:49 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:49 compute-0 nova_compute[238794]: 2026-02-01 15:19:49.863 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:49 compute-0 nova_compute[238794]: 2026-02-01 15:19:49.864 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:19:49 compute-0 nova_compute[238794]: 2026-02-01 15:19:49.864 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:19:49 compute-0 nova_compute[238794]: 2026-02-01 15:19:49.881 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:19:49 compute-0 nova_compute[238794]: 2026-02-01 15:19:49.881 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:50 compute-0 ceph-mon[75179]: pgmap v1037: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:50 compute-0 ceph-mon[75179]: mgrmap e17: compute-0.viosrg(active, since 29m)
Feb 01 15:19:50 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:50 compute-0 nova_compute[238794]: 2026-02-01 15:19:50.333 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:19:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:19:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:19:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:19:51 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb 01 15:19:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:19:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2063902718' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:19:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb 01 15:19:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb 01 15:19:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb 01 15:19:51 compute-0 nova_compute[238794]: 2026-02-01 15:19:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:51 compute-0 nova_compute[238794]: 2026-02-01 15:19:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202'.
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta.tmp'
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta.tmp' to config b'/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/.meta'
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "format": "json"}]: dispatch
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:52 compute-0 ceph-mon[75179]: pgmap v1038: 305 pgs: 305 active+clean; 70 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 110 KiB/s wr, 11 op/s
Feb 01 15:19:52 compute-0 ceph-mon[75179]: osdmap e162: 3 total, 3 up, 3 in
Feb 01 15:19:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 112 KiB/s wr, 11 op/s
Feb 01 15:19:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:52 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/5b65efb9-bab8-427f-8dd7-fedcb50bea0f'.
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta.tmp'
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta.tmp' to config b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8/.meta'
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:53 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "format": "json"}]: dispatch
Feb 01 15:19:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:53.593+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8d162ed-6915-4c91-85d0-a5648c53b8d8' of type subvolume
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a8d162ed-6915-4c91-85d0-a5648c53b8d8' of type subvolume
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a8d162ed-6915-4c91-85d0-a5648c53b8d8'' moved to trashcan
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:53 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a8d162ed-6915-4c91-85d0-a5648c53b8d8, vol_name:cephfs) < ""
Feb 01 15:19:54 compute-0 ceph-mon[75179]: pgmap v1040: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 112 KiB/s wr, 11 op/s
Feb 01 15:19:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:54 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb 01 15:19:54 compute-0 nova_compute[238794]: 2026-02-01 15:19:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 539 B/s rd, 94 KiB/s wr, 9 op/s
Feb 01 15:19:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "format": "json"}]: dispatch
Feb 01 15:19:55 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a8d162ed-6915-4c91-85d0-a5648c53b8d8", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:55 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb 01 15:19:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:55 compute-0 nova_compute[238794]: 2026-02-01 15:19:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} v 0)
Feb 01 15:19:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} : dispatch
Feb 01 15:19:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]}]': finished
Feb 01 15:19:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb 01 15:19:55 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:55 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, tenant_id:7043f01f29d441a4801c5afbb65b54e3, vol_name:cephfs) < ""
Feb 01 15:19:55 compute-0 sshd-session[248734]: Connection closed by authenticating user root 170.64.196.59 port 59622 [preauth]
Feb 01 15:19:56 compute-0 ceph-mon[75179]: pgmap v1041: 305 pgs: 305 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 539 B/s rd, 94 KiB/s wr, 9 op/s
Feb 01 15:19:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]} : dispatch
Feb 01 15:19:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2,allow rw path=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_5f542ebc-7768-479b-a371-3e911afa4848"]}]': finished
Feb 01 15:19:56 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:19:56 compute-0 nova_compute[238794]: 2026-02-01 15:19:56.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:56 compute-0 nova_compute[238794]: 2026-02-01 15:19:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/aaba7f52-5353-40f7-aa14-6d95137a862b'.
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:19:56 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:19:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:19:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:57 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "tenant_id": "7043f01f29d441a4801c5afbb65b54e3", "access_level": "rw", "format": "json"}]: dispatch
Feb 01 15:19:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.346 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.348 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.348 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:19:57 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:19:57.709+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b1172f7-abae-4452-a7de-df2b972dd4b6' of type subvolume
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b1172f7-abae-4452-a7de-df2b972dd4b6' of type subvolume
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b1172f7-abae-4452-a7de-df2b972dd4b6'' moved to trashcan
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:19:57 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b1172f7-abae-4452-a7de-df2b972dd4b6, vol_name:cephfs) < ""
Feb 01 15:19:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:19:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847495700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:19:57 compute-0 nova_compute[238794]: 2026-02-01 15:19:57.930 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.056 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.057 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.058 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.058 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:19:58 compute-0 ceph-mon[75179]: pgmap v1042: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:19:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:19:58 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb 01 15:19:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1847495700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.136 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.136 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.164 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:19:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:19:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:19:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521019029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.647 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.653 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.670 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.672 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:19:58 compute-0 nova_compute[238794]: 2026-02-01 15:19:58.672 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:19:58 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:19:58 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb 01 15:19:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} v 0)
Feb 01 15:19:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} : dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]}]': finished
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202
Feb 01 15:19:59 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/5f542ebc-7768-479b-a371-3e911afa4848/88f8d8b2-1944-472d-8ceb-5fdb60ce8202],prefix=session evict} (starting...)
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:19:59 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5f542ebc-7768-479b-a371-3e911afa4848, vol_name:cephfs) < ""
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "format": "json"}]: dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b1172f7-abae-4452-a7de-df2b972dd4b6", "force": true, "format": "json"}]: dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/521019029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]} : dispatch
Feb 01 15:19:59 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5365cf8-68f4-4bb7-b1f2-7a560b4f3280"]}]': finished
Feb 01 15:20:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "format": "json"}]: dispatch
Feb 01 15:20:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:00 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:00 compute-0 ceph-mon[75179]: pgmap v1043: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:20:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:00 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5f542ebc-7768-479b-a371-3e911afa4848", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:20:00 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5241 writes, 24K keys, 5241 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5241 writes, 5241 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1856 writes, 8970 keys, 1856 commit groups, 1.0 writes per commit group, ingest: 11.37 MB, 0.02 MB/s
                                           Interval WAL: 1856 writes, 1856 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    174.6      0.16              0.06        13    0.012       0      0       0.0       0.0
                                             L6      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    206.7    170.7      0.53              0.23        12    0.044     55K   6344       0.0       0.0
                                            Sum      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    159.7    171.6      0.69              0.30        25    0.028     55K   6344       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3    151.0    153.8      0.39              0.17        12    0.033     31K   3150       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    206.7    170.7      0.53              0.23        12    0.044     55K   6344       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    180.2      0.15              0.06        12    0.013       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.11 GB read, 0.06 MB/s read, 0.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635c5d4b8d0#2 capacity: 304.00 MB usage: 11.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000229 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(718,11.11 MB,3.65304%) FilterBlock(26,162.11 KB,0.0520756%) IndexBlock(26,311.61 KB,0.100101%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 01 15:20:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:20:01 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "format": "json"}]: dispatch
Feb 01 15:20:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb 01 15:20:01 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb 01 15:20:02 compute-0 ceph-mon[75179]: pgmap v1044: 305 pgs: 305 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 112 KiB/s wr, 9 op/s
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 116 KiB/s wr, 9 op/s
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Feb 01 15:20:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:20:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Feb 01 15:20:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Feb 01 15:20:02 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2
Feb 01 15:20:02 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280/f4a26dae-3174-4ba3-b05a-4b9c53c9d5a2],prefix=session evict} (starting...)
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Feb 01 15:20:02 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:03 compute-0 sudo[248783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:20:03 compute-0 sudo[248783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:03 compute-0 sudo[248783]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:03 compute-0 sudo[248808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:20:03 compute-0 sudo[248808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Feb 01 15:20:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "format": "json"}]: dispatch
Feb 01 15:20:03 compute-0 sudo[248808]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:20:03 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:20:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:20:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:20:03 compute-0 sudo[248864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:20:03 compute-0 sudo[248864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:03 compute-0 sudo[248864]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:03 compute-0 sudo[248889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:20:03 compute-0 sudo[248889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.030279377 +0000 UTC m=+0.059864461 container create 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:20:04 compute-0 systemd[1]: Started libpod-conmon-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope.
Feb 01 15:20:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.093602523 +0000 UTC m=+0.123187607 container init 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.00313583 +0000 UTC m=+0.032720994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.09995856 +0000 UTC m=+0.129543624 container start 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.103829198 +0000 UTC m=+0.133414262 container attach 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 01 15:20:04 compute-0 zealous_sanderson[248944]: 167 167
Feb 01 15:20:04 compute-0 systemd[1]: libpod-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope: Deactivated successfully.
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.10461101 +0000 UTC m=+0.134196064 container died 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:20:04 compute-0 podman[248940]: 2026-02-01 15:20:04.114200418 +0000 UTC m=+0.053383250 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 01 15:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-18cb70bf2e328a264ef1194edfe7bd8db452b65f0c744a515803cc7d23aa5d27-merged.mount: Deactivated successfully.
Feb 01 15:20:04 compute-0 podman[248926]: 2026-02-01 15:20:04.137723004 +0000 UTC m=+0.167308068 container remove 816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:20:04 compute-0 systemd[1]: libpod-conmon-816aa17895443143745169aeee40aefd93e6e1f50652642b23b8a6810edf4426.scope: Deactivated successfully.
Feb 01 15:20:04 compute-0 podman[248943]: 2026-02-01 15:20:04.14582637 +0000 UTC m=+0.084893409 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 01 15:20:04 compute-0 ceph-mon[75179]: pgmap v1045: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 116 KiB/s wr, 9 op/s
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "auth_id": "bob", "format": "json"}]: dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:20:04 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:20:04 compute-0 podman[249011]: 2026-02-01 15:20:04.259261183 +0000 UTC m=+0.035219863 container create b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:20:04 compute-0 systemd[1]: Started libpod-conmon-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope.
Feb 01 15:20:04 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:04 compute-0 podman[249011]: 2026-02-01 15:20:04.246616491 +0000 UTC m=+0.022575191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:04 compute-0 podman[249011]: 2026-02-01 15:20:04.345828228 +0000 UTC m=+0.121786908 container init b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 15:20:04 compute-0 podman[249011]: 2026-02-01 15:20:04.354637583 +0000 UTC m=+0.130596263 container start b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 01 15:20:04 compute-0 podman[249011]: 2026-02-01 15:20:04.357652488 +0000 UTC m=+0.133611168 container attach b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 15:20:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 108 KiB/s wr, 8 op/s
Feb 01 15:20:04 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb 01 15:20:04 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:1db05a86-0bcd-436c-91b4-4e5f418a5b3f, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb 01 15:20:04 compute-0 jolly_moser[249027]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:20:04 compute-0 jolly_moser[249027]: --> All data devices are unavailable
Feb 01 15:20:04 compute-0 systemd[1]: libpod-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope: Deactivated successfully.
Feb 01 15:20:04 compute-0 podman[249047]: 2026-02-01 15:20:04.955502812 +0000 UTC m=+0.038750422 container died b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b804b36fac204e698a004db950e9974ca6b4d4b6cc582439679a87b1b091f8-merged.mount: Deactivated successfully.
Feb 01 15:20:05 compute-0 podman[249047]: 2026-02-01 15:20:05.004013305 +0000 UTC m=+0.087260875 container remove b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 15:20:05 compute-0 systemd[1]: libpod-conmon-b471f7f305fb609f9ca1b93644f3cf25740bf4d5b704b3f21c4e5127d31059bf.scope: Deactivated successfully.
Feb 01 15:20:05 compute-0 sudo[248889]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb 01 15:20:05 compute-0 sudo[249061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:20:05 compute-0 sudo[249061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:05 compute-0 sudo[249061]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Feb 01 15:20:05 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "format": "json"}]: dispatch
Feb 01 15:20:05 compute-0 sudo[249086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:20:05 compute-0 sudo[249086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.478776836 +0000 UTC m=+0.051353343 container create e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:20:05 compute-0 systemd[1]: Started libpod-conmon-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope.
Feb 01 15:20:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.456752232 +0000 UTC m=+0.029328789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.554000434 +0000 UTC m=+0.126576991 container init e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.562665086 +0000 UTC m=+0.135241593 container start e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.566827502 +0000 UTC m=+0.139403989 container attach e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:20:05 compute-0 dazzling_ritchie[249141]: 167 167
Feb 01 15:20:05 compute-0 systemd[1]: libpod-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope: Deactivated successfully.
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.570259797 +0000 UTC m=+0.142836314 container died e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 01 15:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a6d5b793148c5c2d28f7a1ef13d6e705678c2699dc73ee6bcc1b7efacf8280-merged.mount: Deactivated successfully.
Feb 01 15:20:05 compute-0 podman[249123]: 2026-02-01 15:20:05.612542637 +0000 UTC m=+0.185119154 container remove e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 01 15:20:05 compute-0 systemd[1]: libpod-conmon-e4ab152e53c5970430a69e5c239af4a27b2e1227c254dce01db24d8525c9cd1e.scope: Deactivated successfully.
Feb 01 15:20:05 compute-0 podman[249164]: 2026-02-01 15:20:05.818732257 +0000 UTC m=+0.063985815 container create eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:20:05 compute-0 systemd[1]: Started libpod-conmon-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope.
Feb 01 15:20:05 compute-0 podman[249164]: 2026-02-01 15:20:05.791124147 +0000 UTC m=+0.036377765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:05 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:05 compute-0 podman[249164]: 2026-02-01 15:20:05.921517874 +0000 UTC m=+0.166771432 container init eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:20:05 compute-0 podman[249164]: 2026-02-01 15:20:05.937547781 +0000 UTC m=+0.182801339 container start eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:20:05 compute-0 podman[249164]: 2026-02-01 15:20:05.94252678 +0000 UTC m=+0.187780318 container attach eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:20:06 compute-0 ceph-mon[75179]: pgmap v1046: 305 pgs: 305 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 108 KiB/s wr, 8 op/s
Feb 01 15:20:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "1db05a86-0bcd-436c-91b4-4e5f418a5b3f", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:06 compute-0 tender_cartwright[249180]: {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     "0": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "devices": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "/dev/loop3"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             ],
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_name": "ceph_lv0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_size": "21470642176",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "name": "ceph_lv0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "tags": {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_name": "ceph",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.crush_device_class": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.encrypted": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.objectstore": "bluestore",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_id": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.vdo": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.with_tpm": "0"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             },
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "vg_name": "ceph_vg0"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         }
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     ],
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     "1": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "devices": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "/dev/loop4"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             ],
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_name": "ceph_lv1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_size": "21470642176",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "name": "ceph_lv1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "tags": {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_name": "ceph",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.crush_device_class": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.encrypted": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.objectstore": "bluestore",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_id": "1",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.vdo": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.with_tpm": "0"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             },
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "vg_name": "ceph_vg1"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         }
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     ],
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     "2": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "devices": [
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "/dev/loop5"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             ],
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_name": "ceph_lv2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_size": "21470642176",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "name": "ceph_lv2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "tags": {
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.cluster_name": "ceph",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.crush_device_class": "",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.encrypted": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.objectstore": "bluestore",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osd_id": "2",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.vdo": "0",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:                 "ceph.with_tpm": "0"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             },
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "type": "block",
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:             "vg_name": "ceph_vg2"
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:         }
Feb 01 15:20:06 compute-0 tender_cartwright[249180]:     ]
Feb 01 15:20:06 compute-0 tender_cartwright[249180]: }
Feb 01 15:20:06 compute-0 systemd[1]: libpod-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope: Deactivated successfully.
Feb 01 15:20:06 compute-0 podman[249164]: 2026-02-01 15:20:06.251520567 +0000 UTC m=+0.496774155 container died eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a459af9dde11064435b2664181b2b1556bf28c040e00e5b5c9a3830409c551bb-merged.mount: Deactivated successfully.
Feb 01 15:20:06 compute-0 podman[249164]: 2026-02-01 15:20:06.305224275 +0000 UTC m=+0.550477833 container remove eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 01 15:20:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:06 compute-0 systemd[1]: libpod-conmon-eb1e80f18dfda7f1ecf2347fc46c77d07274df05b7a4caf06b1331b028ec3c0d.scope: Deactivated successfully.
Feb 01 15:20:06 compute-0 sudo[249086]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:06 compute-0 sudo[249200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:20:06 compute-0 sudo[249200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:06 compute-0 sudo[249200]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 147 KiB/s wr, 11 op/s
Feb 01 15:20:06 compute-0 sudo[249225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:20:06 compute-0 sudo[249225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.83712764 +0000 UTC m=+0.057139725 container create b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 15:20:06 compute-0 systemd[1]: Started libpod-conmon-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope.
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.812920194 +0000 UTC m=+0.032932329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:06 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.937845759 +0000 UTC m=+0.157857894 container init b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.947774755 +0000 UTC m=+0.167786820 container start b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.952684452 +0000 UTC m=+0.172696587 container attach b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:20:06 compute-0 relaxed_bhaskara[249279]: 167 167
Feb 01 15:20:06 compute-0 systemd[1]: libpod-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope: Deactivated successfully.
Feb 01 15:20:06 compute-0 conmon[249279]: conmon b315697aa6f218d92090 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope/container/memory.events
Feb 01 15:20:06 compute-0 podman[249262]: 2026-02-01 15:20:06.955858921 +0000 UTC m=+0.175871006 container died b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb998b70a45c22e79e2377a4e262ebdd8658d83b49b41e92191a079641dc85ba-merged.mount: Deactivated successfully.
Feb 01 15:20:07 compute-0 podman[249262]: 2026-02-01 15:20:07.001396631 +0000 UTC m=+0.221408716 container remove b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 01 15:20:07 compute-0 systemd[1]: libpod-conmon-b315697aa6f218d920909fd40627974deb37e80a194b5f2e5531be8b8ffc3bd4.scope: Deactivated successfully.
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:07.065+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5365cf8-68f4-4bb7-b1f2-7a560b4f3280' of type subvolume
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5365cf8-68f4-4bb7-b1f2-7a560b4f3280' of type subvolume
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5365cf8-68f4-4bb7-b1f2-7a560b4f3280'' moved to trashcan
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5365cf8-68f4-4bb7-b1f2-7a560b4f3280, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 podman[249303]: 2026-02-01 15:20:07.172288267 +0000 UTC m=+0.048671838 container create a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 01 15:20:07 compute-0 systemd[1]: Started libpod-conmon-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope.
Feb 01 15:20:07 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:20:07 compute-0 podman[249303]: 2026-02-01 15:20:07.148591006 +0000 UTC m=+0.024974627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:20:07 compute-0 podman[249303]: 2026-02-01 15:20:07.260406965 +0000 UTC m=+0.136790616 container init a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 01 15:20:07 compute-0 podman[249303]: 2026-02-01 15:20:07.274897359 +0000 UTC m=+0.151280910 container start a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:20:07 compute-0 podman[249303]: 2026-02-01 15:20:07.277977435 +0000 UTC m=+0.154360986 container attach a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:07 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8272d77-fba1-474d-b266-1d9f610d6489, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.815 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:20:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.816 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:20:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:07.816 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:20:07 compute-0 lvm[249398]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:20:07 compute-0 lvm[249398]: VG ceph_vg0 finished
Feb 01 15:20:07 compute-0 lvm[249399]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:20:07 compute-0 lvm[249399]: VG ceph_vg1 finished
Feb 01 15:20:07 compute-0 lvm[249401]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:20:07 compute-0 lvm[249401]: VG ceph_vg2 finished
Feb 01 15:20:08 compute-0 fervent_wilson[249320]: {}
Feb 01 15:20:08 compute-0 systemd[1]: libpod-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Deactivated successfully.
Feb 01 15:20:08 compute-0 systemd[1]: libpod-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Consumed 1.194s CPU time.
Feb 01 15:20:08 compute-0 podman[249303]: 2026-02-01 15:20:08.061134347 +0000 UTC m=+0.937517968 container died a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 01 15:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c3de69999209b0116b2a9d323490080dcb2ee6a042aa5738a6cb7f97433616-merged.mount: Deactivated successfully.
Feb 01 15:20:08 compute-0 podman[249303]: 2026-02-01 15:20:08.101136763 +0000 UTC m=+0.977520304 container remove a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:20:08 compute-0 systemd[1]: libpod-conmon-a94d4ad384982dc99ce3d99c7f37772d1a417963752aadc9bb5a198489c738bb.scope: Deactivated successfully.
Feb 01 15:20:08 compute-0 sudo[249225]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:20:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:08 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:20:08 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:08 compute-0 ceph-mon[75179]: pgmap v1047: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 147 KiB/s wr, 11 op/s
Feb 01 15:20:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "format": "json"}]: dispatch
Feb 01 15:20:08 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5365cf8-68f4-4bb7-b1f2-7a560b4f3280", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:08 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:20:08 compute-0 sudo[249416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:20:08 compute-0 sudo[249416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:20:08 compute-0 sudo[249416]: pam_unix(sudo:session): session closed for user root
Feb 01 15:20:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 7 op/s
Feb 01 15:20:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb 01 15:20:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:93d4f46b-9bfd-433e-b5d5-9e9b76f62d85, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Feb 01 15:20:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489_412605f1-3f08-4d5b-b5fa-295e1cba97d5", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a8272d77-fba1-474d-b266-1d9f610d6489", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/7fe7cc3e-ade4-459d-8ee4-4b9d4afebbf6'.
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta.tmp'
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta.tmp' to config b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9/.meta'
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:09 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:20:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:20:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb 01 15:20:10 compute-0 ceph-mon[75179]: pgmap v1048: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 7 op/s
Feb 01 15:20:10 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "93d4f46b-9bfd-433e-b5d5-9e9b76f62d85", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:10 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:20:10 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb 01 15:20:10 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb 01 15:20:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 121 KiB/s wr, 9 op/s
Feb 01 15:20:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:11 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb 01 15:20:11 compute-0 ceph-mon[75179]: osdmap e163: 3 total, 3 up, 3 in
Feb 01 15:20:11 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "format": "json"}]: dispatch
Feb 01 15:20:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:11 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:12 compute-0 ceph-mon[75179]: pgmap v1050: 305 pgs: 305 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 121 KiB/s wr, 9 op/s
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56740215-53be-496a-bb36-0fdd2c1498f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56740215-53be-496a-bb36-0fdd2c1498f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:12 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:12.836+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56740215-53be-496a-bb36-0fdd2c1498f9' of type subvolume
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56740215-53be-496a-bb36-0fdd2c1498f9' of type subvolume
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56740215-53be-496a-bb36-0fdd2c1498f9'' moved to trashcan
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:20:12 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56740215-53be-496a-bb36-0fdd2c1498f9, vol_name:cephfs) < ""
Feb 01 15:20:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "format": "json"}]: dispatch
Feb 01 15:20:13 compute-0 ceph-mon[75179]: pgmap v1051: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb 01 15:20:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "format": "json"}]: dispatch
Feb 01 15:20:13 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56740215-53be-496a-bb36-0fdd2c1498f9", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb 01 15:20:15 compute-0 ceph-mon[75179]: pgmap v1052: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 99 KiB/s wr, 9 op/s
Feb 01 15:20:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb 01 15:20:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb 01 15:20:16 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 102 KiB/s wr, 9 op/s
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/7b4dae3c-af1f-4fca-9e91-24f56e0bd08e'.
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta.tmp'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta.tmp' to config b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c/.meta'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:20:16 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:16 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f6dd2a46-7c0a-4607-8275-a93a5c9b55f1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:17 compute-0 ceph-mon[75179]: osdmap e164: 3 total, 3 up, 3 in
Feb 01 15:20:17 compute-0 ceph-mon[75179]: pgmap v1054: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 102 KiB/s wr, 9 op/s
Feb 01 15:20:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:20:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb 01 15:20:17 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:20:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1_ec81534c-37e1-436f-8b77-bcabec4a8b35", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "f6dd2a46-7c0a-4607-8275-a93a5c9b55f1", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:20:17
Feb 01 15:20:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:20:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:20:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Feb 01 15:20:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 865 B/s rd, 98 KiB/s wr, 9 op/s
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:19 compute-0 ceph-mon[75179]: pgmap v1055: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 865 B/s rd, 98 KiB/s wr, 9 op/s
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "format": "json"}]: dispatch
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:20.226+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81f31f1a-09e0-4333-ae71-05dc6131f94c' of type subvolume
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81f31f1a-09e0-4333-ae71-05dc6131f94c' of type subvolume
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81f31f1a-09e0-4333-ae71-05dc6131f94c'' moved to trashcan
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81f31f1a-09e0-4333-ae71-05dc6131f94c, vol_name:cephfs) < ""
Feb 01 15:20:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 81 KiB/s wr, 7 op/s
Feb 01 15:20:20 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "format": "json"}]: dispatch
Feb 01 15:20:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "format": "json"}]: dispatch
Feb 01 15:20:21 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81f31f1a-09e0-4333-ae71-05dc6131f94c", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:21 compute-0 ceph-mon[75179]: pgmap v1056: 305 pgs: 305 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 81 KiB/s wr, 7 op/s
Feb 01 15:20:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb 01 15:20:23 compute-0 ceph-mon[75179]: pgmap v1057: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb 01 15:20:23 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:23 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a38fa840-3a94-4f26-a23d-fd03823471c0, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb 01 15:20:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0_85344f13-853b-4a08-8ae5-5931230f8f33", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:24 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "a38fa840-3a94-4f26-a23d-fd03823471c0", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb 01 15:20:25 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb 01 15:20:25 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb 01 15:20:25 compute-0 ceph-mon[75179]: pgmap v1058: 305 pgs: 305 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 6 op/s
Feb 01 15:20:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 72 KiB/s wr, 7 op/s
Feb 01 15:20:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb 01 15:20:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb 01 15:20:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb 01 15:20:26 compute-0 ceph-mon[75179]: osdmap e165: 3 total, 3 up, 3 in
Feb 01 15:20:26 compute-0 ceph-mon[75179]: osdmap e166: 3 total, 3 up, 3 in
Feb 01 15:20:27 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "format": "json"}]: dispatch
Feb 01 15:20:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:27 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:27 compute-0 ceph-mon[75179]: pgmap v1060: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 72 KiB/s wr, 7 op/s
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665971230985504 of space, bias 1.0, pg target 0.1997913692956512 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005303337260864361 of space, bias 4.0, pg target 0.6364004713037233 quantized to 16 (current 16)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:20:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 90 KiB/s wr, 9 op/s
Feb 01 15:20:28 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "format": "json"}]: dispatch
Feb 01 15:20:29 compute-0 ceph-mon[75179]: pgmap v1062: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 766 B/s rd, 90 KiB/s wr, 9 op/s
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4d19973b-9f66-4818-a82a-a0723e2292db, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 29 KiB/s wr, 4 op/s
Feb 01 15:20:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db_9883d1fb-bbfe-49b8-87f6-937369add4a2", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:30 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "4d19973b-9f66-4818-a82a-a0723e2292db", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb 01 15:20:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb 01 15:20:31 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb 01 15:20:31 compute-0 ceph-mon[75179]: pgmap v1063: 305 pgs: 305 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 29 KiB/s wr, 4 op/s
Feb 01 15:20:31 compute-0 ceph-mon[75179]: osdmap e167: 3 total, 3 up, 3 in
Feb 01 15:20:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 91 KiB/s wr, 7 op/s
Feb 01 15:20:33 compute-0 ceph-mon[75179]: pgmap v1065: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 91 KiB/s wr, 7 op/s
Feb 01 15:20:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 2 op/s
Feb 01 15:20:34 compute-0 podman[249441]: 2026-02-01 15:20:34.994650818 +0000 UTC m=+0.066815885 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 01 15:20:35 compute-0 podman[249442]: 2026-02-01 15:20:35.029272803 +0000 UTC m=+0.101597505 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 01 15:20:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "format": "json"}]: dispatch
Feb 01 15:20:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:35 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:35 compute-0 ceph-mon[75179]: pgmap v1066: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 2 op/s
Feb 01 15:20:35 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "format": "json"}]: dispatch
Feb 01 15:20:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 206 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:36 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.899 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:20:36 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.902 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:20:36 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:20:36.903 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:20:37 compute-0 ceph-mon[75179]: pgmap v1067: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 206 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:59b862f6-24f2-457d-8400-334f4d4f6ea3, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3_3c99591e-4443-46aa-892b-59f2735dca00", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:39 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "59b862f6-24f2-457d-8400-334f4d4f6ea3", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:39 compute-0 ceph-mon[75179]: pgmap v1068: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb 01 15:20:40 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb 01 15:20:40 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb 01 15:20:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb 01 15:20:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb 01 15:20:41 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb 01 15:20:41 compute-0 ceph-mon[75179]: pgmap v1069: 305 pgs: 305 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Feb 01 15:20:41 compute-0 ceph-mon[75179]: osdmap e168: 3 total, 3 up, 3 in
Feb 01 15:20:41 compute-0 ceph-mon[75179]: osdmap e169: 3 total, 3 up, 3 in
Feb 01 15:20:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 60 KiB/s wr, 4 op/s
Feb 01 15:20:43 compute-0 ceph-mon[75179]: pgmap v1072: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 60 KiB/s wr, 4 op/s
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp'
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta.tmp' to config b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd/.meta'
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:57052b66-4ef2-422d-b6cb-d8da260acde1, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 3 op/s
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd' of type subvolume
Feb 01 15:20:44 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:20:44.825+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd' of type subvolume
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd'' moved to trashcan
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:20:44 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd, vol_name:cephfs) < ""
Feb 01 15:20:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1_62d967c2-993a-452f-a738-a621dc2deead", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "snap_name": "57052b66-4ef2-422d-b6cb-d8da260acde1", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:45 compute-0 ceph-mon[75179]: pgmap v1073: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 41 KiB/s wr, 3 op/s
Feb 01 15:20:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "format": "json"}]: dispatch
Feb 01 15:20:45 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eae18ccb-38ff-41a6-9b8e-4ea12cbf2edd", "force": true, "format": "json"}]: dispatch
Feb 01 15:20:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.331739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246331774, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1123, "num_deletes": 261, "total_data_size": 1389898, "memory_usage": 1418272, "flush_reason": "Manual Compaction"}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246339487, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1374620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23900, "largest_seqno": 25022, "table_properties": {"data_size": 1369076, "index_size": 2812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13005, "raw_average_key_size": 20, "raw_value_size": 1357351, "raw_average_value_size": 2097, "num_data_blocks": 125, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959179, "oldest_key_time": 1769959179, "file_creation_time": 1769959246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7785 microseconds, and 3982 cpu microseconds.
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.339525) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1374620 bytes OK
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.339545) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341578) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341594) EVENT_LOG_v1 {"time_micros": 1769959246341588, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.341612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1384380, prev total WAL file size 1384380, number of live WAL files 2.
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.342126) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1342KB)], [53(8436KB)]
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246342168, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10013445, "oldest_snapshot_seqno": -1}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5551 keys, 9911389 bytes, temperature: kUnknown
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246391397, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9911389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9870828, "index_size": 25603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138453, "raw_average_key_size": 24, "raw_value_size": 9767764, "raw_average_value_size": 1759, "num_data_blocks": 1064, "num_entries": 5551, "num_filter_entries": 5551, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.391660) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9911389 bytes
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.393667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 201.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(14.5) write-amplify(7.2) OK, records in: 6093, records dropped: 542 output_compression: NoCompression
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.393686) EVENT_LOG_v1 {"time_micros": 1769959246393677, "job": 28, "event": "compaction_finished", "compaction_time_micros": 49297, "compaction_time_cpu_micros": 20136, "output_level": 6, "num_output_files": 1, "total_output_size": 9911389, "num_input_records": 6093, "num_output_records": 5551, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246393933, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959246395008, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.342012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:20:46.395138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:20:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb 01 15:20:47 compute-0 ceph-mon[75179]: pgmap v1074: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:20:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:20:49 compute-0 nova_compute[238794]: 2026-02-01 15:20:49.669 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:49 compute-0 ceph-mon[75179]: pgmap v1075: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 78 KiB/s wr, 5 op/s
Feb 01 15:20:50 compute-0 nova_compute[238794]: 2026-02-01 15:20:50.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:50 compute-0 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:50 compute-0 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:20:50 compute-0 nova_compute[238794]: 2026-02-01 15:20:50.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:20:50 compute-0 nova_compute[238794]: 2026-02-01 15:20:50.336 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:20:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s wr, 3 op/s
Feb 01 15:20:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb 01 15:20:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb 01 15:20:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb 01 15:20:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:20:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:20:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:20:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:20:51 compute-0 nova_compute[238794]: 2026-02-01 15:20:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:51 compute-0 nova_compute[238794]: 2026-02-01 15:20:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:51 compute-0 nova_compute[238794]: 2026-02-01 15:20:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:20:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb 01 15:20:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb 01 15:20:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb 01 15:20:51 compute-0 ceph-mon[75179]: pgmap v1076: 305 pgs: 305 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s wr, 3 op/s
Feb 01 15:20:51 compute-0 ceph-mon[75179]: osdmap e170: 3 total, 3 up, 3 in
Feb 01 15:20:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:20:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/1226999884' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:20:51 compute-0 ceph-mon[75179]: osdmap e171: 3 total, 3 up, 3 in
Feb 01 15:20:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 62 KiB/s wr, 5 op/s
Feb 01 15:20:53 compute-0 ceph-mon[75179]: pgmap v1079: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 62 KiB/s wr, 5 op/s
Feb 01 15:20:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 25 KiB/s wr, 2 op/s
Feb 01 15:20:55 compute-0 nova_compute[238794]: 2026-02-01 15:20:55.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:55 compute-0 nova_compute[238794]: 2026-02-01 15:20:55.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:55 compute-0 ceph-mon[75179]: pgmap v1080: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 25 KiB/s wr, 2 op/s
Feb 01 15:20:56 compute-0 nova_compute[238794]: 2026-02-01 15:20:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:20:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb 01 15:20:57 compute-0 ceph-mon[75179]: pgmap v1081: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb 01 15:20:58 compute-0 nova_compute[238794]: 2026-02-01 15:20:58.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.353 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.354 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:20:59 compute-0 ceph-mon[75179]: pgmap v1082: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 33 KiB/s wr, 3 op/s
Feb 01 15:20:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:20:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885866586' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:20:59 compute-0 nova_compute[238794]: 2026-02-01 15:20:59.880 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.052 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.053 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.053 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.054 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.131 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.131 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.149 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:21:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Feb 01 15:21:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:21:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684780358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.696 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.702 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.717 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.720 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:21:00 compute-0 nova_compute[238794]: 2026-02-01 15:21:00.720 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:21:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1885866586' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:21:00 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2684780358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:21:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb 01 15:21:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb 01 15:21:01 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb 01 15:21:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:21:01 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8950 writes, 33K keys, 8950 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8950 writes, 2269 syncs, 3.94 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3087 writes, 8468 keys, 3087 commit groups, 1.0 writes per commit group, ingest: 9.99 MB, 0.02 MB/s
                                           Interval WAL: 3087 writes, 1257 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:21:01 compute-0 ceph-mon[75179]: pgmap v1083: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s wr, 1 op/s
Feb 01 15:21:01 compute-0 ceph-mon[75179]: osdmap e172: 3 total, 3 up, 3 in
Feb 01 15:21:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb 01 15:21:03 compute-0 ceph-mon[75179]: pgmap v1085: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb 01 15:21:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb 01 15:21:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:21:04 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 4598 syncs, 3.14 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7289 writes, 24K keys, 7289 commit groups, 1.0 writes per commit group, ingest: 34.81 MB, 0.06 MB/s
                                           Interval WAL: 7289 writes, 3168 syncs, 2.30 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:21:05 compute-0 ceph-mon[75179]: pgmap v1086: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s wr, 0 op/s
Feb 01 15:21:05 compute-0 podman[249530]: 2026-02-01 15:21:05.981796225 +0000 UTC m=+0.061509326 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 01 15:21:06 compute-0 podman[249531]: 2026-02-01 15:21:06.061394925 +0000 UTC m=+0.138846793 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true)
Feb 01 15:21:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.817 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:21:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:21:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:21:07 compute-0 ceph-mon[75179]: pgmap v1087: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:08 compute-0 sudo[249572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:21:08 compute-0 sudo[249572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:08 compute-0 sudo[249572]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:08 compute-0 sudo[249597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:21:08 compute-0 sudo[249597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:21:08 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8843 writes, 32K keys, 8843 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8843 writes, 2113 syncs, 4.19 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3112 writes, 8422 keys, 3112 commit groups, 1.0 writes per commit group, ingest: 8.08 MB, 0.01 MB/s
                                           Interval WAL: 3112 writes, 1189 syncs, 2.62 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:21:08 compute-0 sudo[249597]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:08 compute-0 sudo[249655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:21:08 compute-0 sudo[249655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:08 compute-0 sudo[249655]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:08 compute-0 sudo[249680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 01 15:21:08 compute-0 sudo[249680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:09 compute-0 sudo[249680]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:21:09 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:21:09 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:21:09 compute-0 sudo[249723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:21:09 compute-0 sudo[249723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:09 compute-0 sudo[249723]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:09 compute-0 sudo[249748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:21:09 compute-0 sudo[249748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.597542808 +0000 UTC m=+0.033255128 container create ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:21:09 compute-0 systemd[1]: Started libpod-conmon-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope.
Feb 01 15:21:09 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.671328846 +0000 UTC m=+0.107041176 container init ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.580942245 +0000 UTC m=+0.016654565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.680256865 +0000 UTC m=+0.115969195 container start ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.683122965 +0000 UTC m=+0.118835305 container attach ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 01 15:21:09 compute-0 hungry_hopper[249801]: 167 167
Feb 01 15:21:09 compute-0 systemd[1]: libpod-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope: Deactivated successfully.
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.686428597 +0000 UTC m=+0.122140957 container died ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 15:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f705469dce8e61c26e80b5b04f16e31a91fa210f9c83d7490f9e735b43054f5-merged.mount: Deactivated successfully.
Feb 01 15:21:09 compute-0 podman[249785]: 2026-02-01 15:21:09.732722858 +0000 UTC m=+0.168435178 container remove ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hopper, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:21:09 compute-0 systemd[1]: libpod-conmon-ca450cbf09486027c315092ef3245f2cc9b36361c433e641fab5752baac3418a.scope: Deactivated successfully.
Feb 01 15:21:09 compute-0 podman[249824]: 2026-02-01 15:21:09.869060311 +0000 UTC m=+0.045390367 container create a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 01 15:21:09 compute-0 systemd[1]: Started libpod-conmon-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope.
Feb 01 15:21:09 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:09 compute-0 podman[249824]: 2026-02-01 15:21:09.848632691 +0000 UTC m=+0.024962757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:09 compute-0 podman[249824]: 2026-02-01 15:21:09.967075114 +0000 UTC m=+0.143405170 container init a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 01 15:21:09 compute-0 podman[249824]: 2026-02-01 15:21:09.979730857 +0000 UTC m=+0.156060883 container start a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:21:09 compute-0 podman[249824]: 2026-02-01 15:21:09.98341619 +0000 UTC m=+0.159746216 container attach a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:21:10 compute-0 ceph-mon[75179]: pgmap v1088: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:21:10 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:21:10 compute-0 kind_archimedes[249841]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:21:10 compute-0 kind_archimedes[249841]: --> All data devices are unavailable
Feb 01 15:21:10 compute-0 systemd[1]: libpod-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope: Deactivated successfully.
Feb 01 15:21:10 compute-0 podman[249824]: 2026-02-01 15:21:10.455801445 +0000 UTC m=+0.632131521 container died a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 01 15:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-76edc611d914ea1d600fc91a2c156ffd99eb8d65b729becbf5cea17b4167831c-merged.mount: Deactivated successfully.
Feb 01 15:21:10 compute-0 podman[249824]: 2026-02-01 15:21:10.501242611 +0000 UTC m=+0.677572637 container remove a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 01 15:21:10 compute-0 systemd[1]: libpod-conmon-a81ce95388f408798fc41d8163b24485b3d4fe1b7dfa3a3e6d6799486827b523.scope: Deactivated successfully.
Feb 01 15:21:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:10 compute-0 sudo[249748]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:10 compute-0 sudo[249874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:21:10 compute-0 sudo[249874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:10 compute-0 sudo[249874]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:10 compute-0 sudo[249899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:21:10 compute-0 sudo[249899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:10 compute-0 podman[249937]: 2026-02-01 15:21:10.936013947 +0000 UTC m=+0.038546086 container create a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:21:10 compute-0 systemd[1]: Started libpod-conmon-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope.
Feb 01 15:21:10 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:10 compute-0 podman[249937]: 2026-02-01 15:21:10.999479157 +0000 UTC m=+0.102011286 container init a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:21:11 compute-0 podman[249937]: 2026-02-01 15:21:11.007604364 +0000 UTC m=+0.110136533 container start a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:21:11 compute-0 friendly_jones[249953]: 167 167
Feb 01 15:21:11 compute-0 systemd[1]: libpod-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope: Deactivated successfully.
Feb 01 15:21:11 compute-0 podman[249937]: 2026-02-01 15:21:11.011687898 +0000 UTC m=+0.114220187 container attach a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:21:11 compute-0 podman[249937]: 2026-02-01 15:21:11.012762868 +0000 UTC m=+0.115295037 container died a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 01 15:21:11 compute-0 podman[249937]: 2026-02-01 15:21:10.920155705 +0000 UTC m=+0.022687874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:11 compute-0 ceph-mgr[75469]: [devicehealth INFO root] Check health
Feb 01 15:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c6a484e1c81a561034788abb0c257cac017b11f36747f4a74e5804edd0ecbeb-merged.mount: Deactivated successfully.
Feb 01 15:21:11 compute-0 podman[249937]: 2026-02-01 15:21:11.048502395 +0000 UTC m=+0.151034524 container remove a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:21:11 compute-0 systemd[1]: libpod-conmon-a122f1c47f48985df5e564183177b8e699e88f32014b260cf6a113441991b1d0.scope: Deactivated successfully.
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.192736867 +0000 UTC m=+0.042038573 container create 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 15:21:11 compute-0 systemd[1]: Started libpod-conmon-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope.
Feb 01 15:21:11 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.174332004 +0000 UTC m=+0.023633740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.271938976 +0000 UTC m=+0.121240702 container init 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.277658856 +0000 UTC m=+0.126960562 container start 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.281343399 +0000 UTC m=+0.130645115 container attach 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 01 15:21:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]: {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     "0": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "devices": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "/dev/loop3"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             ],
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_name": "ceph_lv0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_size": "21470642176",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "name": "ceph_lv0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "tags": {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_name": "ceph",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.crush_device_class": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.encrypted": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.objectstore": "bluestore",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_id": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.vdo": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.with_tpm": "0"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             },
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "vg_name": "ceph_vg0"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         }
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     ],
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     "1": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "devices": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "/dev/loop4"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             ],
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_name": "ceph_lv1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_size": "21470642176",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "name": "ceph_lv1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "tags": {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_name": "ceph",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.crush_device_class": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.encrypted": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.objectstore": "bluestore",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_id": "1",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.vdo": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.with_tpm": "0"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             },
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "vg_name": "ceph_vg1"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         }
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     ],
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     "2": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "devices": [
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "/dev/loop5"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             ],
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_name": "ceph_lv2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_size": "21470642176",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "name": "ceph_lv2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "tags": {
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.cluster_name": "ceph",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.crush_device_class": "",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.encrypted": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.objectstore": "bluestore",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osd_id": "2",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.vdo": "0",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:                 "ceph.with_tpm": "0"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             },
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "type": "block",
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:             "vg_name": "ceph_vg2"
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:         }
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]:     ]
Feb 01 15:21:11 compute-0 peaceful_fermat[249992]: }
Feb 01 15:21:11 compute-0 systemd[1]: libpod-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope: Deactivated successfully.
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.574808993 +0000 UTC m=+0.424110729 container died 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 01 15:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d60116c68e36aaa75453638fcb7d09cd99653597bceee1e588252c796342f93d-merged.mount: Deactivated successfully.
Feb 01 15:21:11 compute-0 podman[249976]: 2026-02-01 15:21:11.629468008 +0000 UTC m=+0.478769744 container remove 38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb 01 15:21:11 compute-0 systemd[1]: libpod-conmon-38422aa712785c1319d65711ade8f91858b1ce09d8bbf5e544a2650091c791bd.scope: Deactivated successfully.
Feb 01 15:21:11 compute-0 sudo[249899]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:11 compute-0 sudo[250014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:21:11 compute-0 sudo[250014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:11 compute-0 sudo[250014]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:11 compute-0 sudo[250039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:21:11 compute-0 sudo[250039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.105621538 +0000 UTC m=+0.050792838 container create 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:21:12 compute-0 systemd[1]: Started libpod-conmon-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope.
Feb 01 15:21:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.079467398 +0000 UTC m=+0.024638728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.17416206 +0000 UTC m=+0.119333390 container init 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:21:12 compute-0 ceph-mon[75179]: pgmap v1089: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s wr, 0 op/s
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.180806565 +0000 UTC m=+0.125977815 container start 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:21:12 compute-0 modest_greider[250092]: 167 167
Feb 01 15:21:12 compute-0 systemd[1]: libpod-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope: Deactivated successfully.
Feb 01 15:21:12 compute-0 conmon[250092]: conmon 663c6552b6544e8f2e78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope/container/memory.events
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.186350329 +0000 UTC m=+0.131521679 container attach 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.186763051 +0000 UTC m=+0.131934331 container died 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 01 15:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-14e8ced847417b856ff804617db75e046b9227eee190c037fbfa1b62d8c46dff-merged.mount: Deactivated successfully.
Feb 01 15:21:12 compute-0 podman[250075]: 2026-02-01 15:21:12.227861687 +0000 UTC m=+0.173032937 container remove 663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:21:12 compute-0 systemd[1]: libpod-conmon-663c6552b6544e8f2e786af5c19297546263491a50860998818748b7148e76c7.scope: Deactivated successfully.
Feb 01 15:21:12 compute-0 podman[250115]: 2026-02-01 15:21:12.406496169 +0000 UTC m=+0.059283854 container create 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 15:21:12 compute-0 systemd[1]: Started libpod-conmon-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope.
Feb 01 15:21:12 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:21:12 compute-0 podman[250115]: 2026-02-01 15:21:12.382236583 +0000 UTC m=+0.035024358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:21:12 compute-0 podman[250115]: 2026-02-01 15:21:12.507713342 +0000 UTC m=+0.160501067 container init 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 15:21:12 compute-0 podman[250115]: 2026-02-01 15:21:12.51622024 +0000 UTC m=+0.169007925 container start 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:21:12 compute-0 podman[250115]: 2026-02-01 15:21:12.519259994 +0000 UTC m=+0.172047709 container attach 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:21:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:13 compute-0 lvm[250211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:21:13 compute-0 lvm[250211]: VG ceph_vg0 finished
Feb 01 15:21:13 compute-0 lvm[250212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:21:13 compute-0 lvm[250212]: VG ceph_vg1 finished
Feb 01 15:21:13 compute-0 lvm[250214]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:21:13 compute-0 lvm[250214]: VG ceph_vg2 finished
Feb 01 15:21:13 compute-0 trusting_galois[250132]: {}
Feb 01 15:21:13 compute-0 systemd[1]: libpod-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope: Deactivated successfully.
Feb 01 15:21:13 compute-0 podman[250217]: 2026-02-01 15:21:13.303399774 +0000 UTC m=+0.033564857 container died 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 01 15:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-80111781ae703b2c44f7590e8e8693ead6f1646a4760d169b3956904011f4358-merged.mount: Deactivated successfully.
Feb 01 15:21:13 compute-0 podman[250217]: 2026-02-01 15:21:13.336758314 +0000 UTC m=+0.066923387 container remove 6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 01 15:21:13 compute-0 systemd[1]: libpod-conmon-6d2ce95d6c16d18ffd9cff9af0c2d268ba9c1df1de91b1d4511a7c37d3870f2c.scope: Deactivated successfully.
Feb 01 15:21:13 compute-0 sudo[250039]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:21:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:21:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:13 compute-0 sudo[250232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:21:13 compute-0 sudo[250232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:21:13 compute-0 sudo[250232]: pam_unix(sudo:session): session closed for user root
Feb 01 15:21:14 compute-0 ceph-mon[75179]: pgmap v1090: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:14 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:14 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:21:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:16 compute-0 ceph-mon[75179]: pgmap v1091: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:21:17
Feb 01 15:21:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:21:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:21:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'images', 'backups', 'vms']
Feb 01 15:21:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:21:18 compute-0 ceph-mon[75179]: pgmap v1092: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:21:20 compute-0 ceph-mon[75179]: pgmap v1093: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:21 compute-0 ceph-mon[75179]: pgmap v1094: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:21 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:21.598 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:21:21 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:21.600 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:21:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:23 compute-0 ceph-mon[75179]: pgmap v1095: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:24 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:21:24.603 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:21:25 compute-0 ceph-mon[75179]: pgmap v1096: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:27 compute-0 ceph-mon[75179]: pgmap v1097: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659717898882094 of space, bias 1.0, pg target 0.19979153696646282 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005527671647039236 of space, bias 4.0, pg target 0.6633205976447083 quantized to 16 (current 16)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:21:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:29 compute-0 ceph-mon[75179]: pgmap v1098: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:31 compute-0 ceph-mon[75179]: pgmap v1099: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:33 compute-0 ceph-mon[75179]: pgmap v1100: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:35 compute-0 ceph-mon[75179]: pgmap v1101: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:37 compute-0 podman[250257]: 2026-02-01 15:21:37.005996537 +0000 UTC m=+0.079297672 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:21:37 compute-0 podman[250258]: 2026-02-01 15:21:37.045984302 +0000 UTC m=+0.117732344 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 01 15:21:37 compute-0 ceph-mon[75179]: pgmap v1102: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:39 compute-0 ceph-mon[75179]: pgmap v1103: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/6f3fb38f-4fb5-428d-af1f-466faa7d1587'.
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:39 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:39 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:21:39 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:21:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb 01 15:21:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:21:40 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb 01 15:21:40 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:21:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:41 compute-0 ceph-mon[75179]: pgmap v1104: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb 01 15:21:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb 01 15:21:43 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "format": "json"}]: dispatch
Feb 01 15:21:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:43 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:43 compute-0 ceph-mon[75179]: pgmap v1105: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb 01 15:21:43 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "format": "json"}]: dispatch
Feb 01 15:21:44 compute-0 nova_compute[238794]: 2026-02-01 15:21:44.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb 01 15:21:45 compute-0 ceph-mon[75179]: pgmap v1106: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Feb 01 15:21:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb 01 15:21:47 compute-0 ceph-mon[75179]: pgmap v1107: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp'
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta.tmp' to config b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa/.meta'
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e53ce9d-659d-4efa-bf51-1e666e409ac3, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8299156f70>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825b5e6d90>)]
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:21:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:21:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3_a8b1ef42-1e25-4f11-8838-77f94c29ebe4", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:49 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "snap_name": "0e53ce9d-659d-4efa-bf51-1e666e409ac3", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:49 compute-0 ceph-mon[75179]: pgmap v1108: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s wr, 1 op/s
Feb 01 15:21:50 compute-0 nova_compute[238794]: 2026-02-01 15:21:50.335 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:50 compute-0 nova_compute[238794]: 2026-02-01 15:21:50.335 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:21:50 compute-0 nova_compute[238794]: 2026-02-01 15:21:50.336 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:21:50 compute-0 nova_compute[238794]: 2026-02-01 15:21:50.350 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:21:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 33 KiB/s wr, 2 op/s
Feb 01 15:21:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb 01 15:21:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb 01 15:21:50 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb 01 15:21:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:21:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:21:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:21:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:21:51 compute-0 nova_compute[238794]: 2026-02-01 15:21:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:51 compute-0 nova_compute[238794]: 2026-02-01 15:21:51.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:51 compute-0 nova_compute[238794]: 2026-02-01 15:21:51.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:51 compute-0 nova_compute[238794]: 2026-02-01 15:21:51.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:21:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:091d85e3-6421-421c-a022-3095345db8aa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:091d85e3-6421-421c-a022-3095345db8aa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.657+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '091d85e3-6421-421c-a022-3095345db8aa' of type subvolume
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '091d85e3-6421-421c-a022-3095345db8aa' of type subvolume
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/091d85e3-6421-421c-a022-3095345db8aa'' moved to trashcan
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:091d85e3-6421-421c-a022-3095345db8aa, vol_name:cephfs) < ""
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.681+0000 7f8269f87640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:21:51.715+0000 7f8268f85640 -1 client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mgr[75469]: client.0 error registering admin socket command: (17) File exists
Feb 01 15:21:51 compute-0 ceph-mon[75179]: pgmap v1109: 305 pgs: 305 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 33 KiB/s wr, 2 op/s
Feb 01 15:21:51 compute-0 ceph-mon[75179]: osdmap e173: 3 total, 3 up, 3 in
Feb 01 15:21:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:21:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3227912831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:21:51 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.viosrg(active, since 31m)
Feb 01 15:21:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb 01 15:21:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "091d85e3-6421-421c-a022-3095345db8aa", "format": "json"}]: dispatch
Feb 01 15:21:52 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "091d85e3-6421-421c-a022-3095345db8aa", "force": true, "format": "json"}]: dispatch
Feb 01 15:21:52 compute-0 ceph-mon[75179]: mgrmap e18: compute-0.viosrg(active, since 31m)
Feb 01 15:21:53 compute-0 nova_compute[238794]: 2026-02-01 15:21:53.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:53 compute-0 nova_compute[238794]: 2026-02-01 15:21:53.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 01 15:21:53 compute-0 nova_compute[238794]: 2026-02-01 15:21:53.334 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 01 15:21:53 compute-0 ceph-mon[75179]: pgmap v1111: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb 01 15:21:53 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.viosrg(active, since 31m)
Feb 01 15:21:54 compute-0 nova_compute[238794]: 2026-02-01 15:21:54.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:54 compute-0 nova_compute[238794]: 2026-02-01 15:21:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 01 15:21:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb 01 15:21:54 compute-0 ceph-mon[75179]: mgrmap e19: compute-0.viosrg(active, since 31m)
Feb 01 15:21:55 compute-0 nova_compute[238794]: 2026-02-01 15:21:55.339 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:55 compute-0 nova_compute[238794]: 2026-02-01 15:21:55.339 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:55 compute-0 ceph-mon[75179]: pgmap v1112: 305 pgs: 305 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 2 op/s
Feb 01 15:21:56 compute-0 nova_compute[238794]: 2026-02-01 15:21:56.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:21:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:21:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb 01 15:21:57 compute-0 ceph-mon[75179]: pgmap v1113: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb 01 15:21:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb 01 15:21:59 compute-0 ceph-mon[75179]: pgmap v1114: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 51 KiB/s wr, 4 op/s
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.366 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.367 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.368 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.368 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.369 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:22:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 51 KiB/s wr, 33 op/s
Feb 01 15:22:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:22:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068617142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:22:00 compute-0 nova_compute[238794]: 2026-02-01 15:22:00.900 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.064 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.065 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.066 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.066 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.325 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.325 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:22:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb 01 15:22:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb 01 15:22:01 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.388 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing inventories for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.523 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating ProviderTree inventory for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.524 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Updating inventory in ProviderTree for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.536 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing aggregate associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.556 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Refreshing trait associations for resource provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18, traits: COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_BMI2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 01 15:22:01 compute-0 nova_compute[238794]: 2026-02-01 15:22:01.572 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:22:01 compute-0 ceph-mon[75179]: pgmap v1115: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 51 KiB/s wr, 33 op/s
Feb 01 15:22:01 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2068617142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:22:01 compute-0 ceph-mon[75179]: osdmap e174: 3 total, 3 up, 3 in
Feb 01 15:22:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:22:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937054747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:22:02 compute-0 nova_compute[238794]: 2026-02-01 15:22:02.120 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:22:02 compute-0 nova_compute[238794]: 2026-02-01 15:22:02.127 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:22:02 compute-0 nova_compute[238794]: 2026-02-01 15:22:02.145 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:22:02 compute-0 nova_compute[238794]: 2026-02-01 15:22:02.148 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:22:02 compute-0 nova_compute[238794]: 2026-02-01 15:22:02.148 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:22:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb 01 15:22:02 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/937054747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:22:03 compute-0 ceph-mon[75179]: pgmap v1117: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb 01 15:22:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/efb0581a-17af-495b-a4b5-cac17d7af446'.
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:05 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 01 15:22:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:22:05 compute-0 ceph-mon[75179]: pgmap v1118: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 46 KiB/s wr, 93 op/s
Feb 01 15:22:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3460515953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 01 15:22:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb 01 15:22:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 01 15:22:06 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb 01 15:22:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.818 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:22:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.819 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:22:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:07.819 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:22:07 compute-0 ceph-mon[75179]: pgmap v1119: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb 01 15:22:07 compute-0 podman[250366]: 2026-02-01 15:22:07.976374446 +0000 UTC m=+0.062331248 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 01 15:22:07 compute-0 podman[250367]: 2026-02-01 15:22:07.98615154 +0000 UTC m=+0.075061064 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 01 15:22:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb 01 15:22:08 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "format": "json"}]: dispatch
Feb 01 15:22:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:08 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:09 compute-0 ceph-mon[75179]: pgmap v1120: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 24 KiB/s wr, 91 op/s
Feb 01 15:22:09 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "format": "json"}]: dispatch
Feb 01 15:22:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 24 KiB/s wr, 62 op/s
Feb 01 15:22:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:11 compute-0 ceph-mon[75179]: pgmap v1121: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 24 KiB/s wr, 62 op/s
Feb 01 15:22:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Feb 01 15:22:13 compute-0 sudo[250410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:13 compute-0 sudo[250410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:13 compute-0 sudo[250410]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:13 compute-0 sudo[250435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 01 15:22:13 compute-0 sudo[250435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp'
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta.tmp' to config b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45/.meta'
Feb 01 15:22:13 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c6222879-ed29-4cfb-9aea-5793593bdf51, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:13 compute-0 sudo[250435]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:22:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:13 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:22:13 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:13 compute-0 sudo[250480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:13 compute-0 sudo[250480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:13 compute-0 sudo[250480]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:13 compute-0 ceph-mon[75179]: pgmap v1122: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Feb 01 15:22:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:13 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:13 compute-0 sudo[250505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:22:13 compute-0 sudo[250505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:14 compute-0 sudo[250505]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:14 compute-0 sudo[250562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:14 compute-0 sudo[250562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:14 compute-0 sudo[250562]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:14 compute-0 sudo[250587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- inventory --format=json-pretty --filter-for-batch
Feb 01 15:22:14 compute-0 sudo[250587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.84048969 +0000 UTC m=+0.062419071 container create dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 01 15:22:14 compute-0 systemd[1]: Started libpod-conmon-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope.
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.81017818 +0000 UTC m=+0.032107661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:14 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.923766232 +0000 UTC m=+0.145695643 container init dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.930201623 +0000 UTC m=+0.152130994 container start dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.933339921 +0000 UTC m=+0.155269352 container attach dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 01 15:22:14 compute-0 strange_galileo[250640]: 167 167
Feb 01 15:22:14 compute-0 systemd[1]: libpod-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope: Deactivated successfully.
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.935715117 +0000 UTC m=+0.157644488 container died dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Feb 01 15:22:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51_b24e846b-f29d-418f-a067-565f2a42532d", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:14 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "snap_name": "c6222879-ed29-4cfb-9aea-5793593bdf51", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e942142c467b6a05f0aa769147f68a15571c6eeb92742dd9b4ecac61543976a-merged.mount: Deactivated successfully.
Feb 01 15:22:14 compute-0 podman[250624]: 2026-02-01 15:22:14.971698166 +0000 UTC m=+0.193627537 container remove dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 01 15:22:14 compute-0 systemd[1]: libpod-conmon-dc91141d68e16dfc3dc31431923ebfcf14789b05a0bbb34378c1e58129dc0aca.scope: Deactivated successfully.
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.097587664 +0000 UTC m=+0.036747701 container create 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 01 15:22:15 compute-0 systemd[1]: Started libpod-conmon-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope.
Feb 01 15:22:15 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.082800959 +0000 UTC m=+0.021960996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.181766033 +0000 UTC m=+0.120926060 container init 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.188580464 +0000 UTC m=+0.127740491 container start 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.191743262 +0000 UTC m=+0.130903279 container attach 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]: [
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:     {
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "available": false,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "being_replaced": false,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "ceph_device_lvm": false,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "lsm_data": {},
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "lvs": [],
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "path": "/dev/sr0",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "rejected_reasons": [
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "Has a FileSystem",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "Insufficient space (<5GB)"
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         ],
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         "sys_api": {
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "actuators": null,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "device_nodes": [
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:                 "sr0"
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             ],
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "devname": "sr0",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "human_readable_size": "482.00 KB",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "id_bus": "ata",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "model": "QEMU DVD-ROM",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "nr_requests": "2",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "parent": "/dev/sr0",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "partitions": {},
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "path": "/dev/sr0",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "removable": "1",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "rev": "2.5+",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "ro": "0",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "rotational": "1",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "sas_address": "",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "sas_device_handle": "",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "scheduler_mode": "mq-deadline",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "sectors": 0,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "sectorsize": "2048",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "size": 493568.0,
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "support_discard": "2048",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "type": "disk",
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:             "vendor": "QEMU"
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:         }
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]:     }
Feb 01 15:22:15 compute-0 peaceful_lumiere[250680]: ]
Feb 01 15:22:15 compute-0 systemd[1]: libpod-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope: Deactivated successfully.
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.700724056 +0000 UTC m=+0.639884103 container died 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-013956c7d7417baedb6e51fbc755bd72bb36afb286658d5e2af1d57a31289f47-merged.mount: Deactivated successfully.
Feb 01 15:22:15 compute-0 podman[250663]: 2026-02-01 15:22:15.746550131 +0000 UTC m=+0.685710178 container remove 2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:22:15 compute-0 systemd[1]: libpod-conmon-2fe7f2b205b3bc994b5be6329126a682eee9030b6e5efa1e024e7d2c88eb04a4.scope: Deactivated successfully.
Feb 01 15:22:15 compute-0 sudo[250587]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:22:15 compute-0 sudo[251490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:15 compute-0 sudo[251490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:15 compute-0 sudo[251490]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:15 compute-0 sudo[251515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:22:15 compute-0 sudo[251515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:15 compute-0 ceph-mon[75179]: pgmap v1123: 305 pgs: 305 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:22:15 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.241819701 +0000 UTC m=+0.055537068 container create 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:22:16 compute-0 systemd[1]: Started libpod-conmon-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope.
Feb 01 15:22:16 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.305192917 +0000 UTC m=+0.118910264 container init 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.312850131 +0000 UTC m=+0.126567498 container start 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.221481981 +0000 UTC m=+0.035199398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:16 compute-0 goofy_germain[251567]: 167 167
Feb 01 15:22:16 compute-0 systemd[1]: libpod-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope: Deactivated successfully.
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.31675148 +0000 UTC m=+0.130468857 container attach 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.317415569 +0000 UTC m=+0.131132936 container died 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf7d3188bcba2e44c2d8c9312bacac6826518d4e85ddc203e753e6469cda3a3c-merged.mount: Deactivated successfully.
Feb 01 15:22:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:16 compute-0 podman[251552]: 2026-02-01 15:22:16.356963587 +0000 UTC m=+0.170680934 container remove 217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 01 15:22:16 compute-0 systemd[1]: libpod-conmon-217868cd8d0aac2c16bef050e11c65ded25534cdf149f33203f9874e86df4158.scope: Deactivated successfully.
Feb 01 15:22:16 compute-0 podman[251592]: 2026-02-01 15:22:16.525962154 +0000 UTC m=+0.052453721 container create 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 01 15:22:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Feb 01 15:22:16 compute-0 systemd[1]: Started libpod-conmon-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope.
Feb 01 15:22:16 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:16 compute-0 podman[251592]: 2026-02-01 15:22:16.595573454 +0000 UTC m=+0.122065071 container init 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Feb 01 15:22:16 compute-0 podman[251592]: 2026-02-01 15:22:16.506079206 +0000 UTC m=+0.032570863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:16 compute-0 podman[251592]: 2026-02-01 15:22:16.607513829 +0000 UTC m=+0.134005426 container start 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 01 15:22:16 compute-0 podman[251592]: 2026-02-01 15:22:16.611145121 +0000 UTC m=+0.137636718 container attach 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:22:16 compute-0 reverent_moser[251609]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:22:16 compute-0 reverent_moser[251609]: --> All data devices are unavailable
Feb 01 15:22:17 compute-0 systemd[1]: libpod-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope: Deactivated successfully.
Feb 01 15:22:17 compute-0 conmon[251609]: conmon 8260a21daaf55f7095a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope/container/memory.events
Feb 01 15:22:17 compute-0 podman[251592]: 2026-02-01 15:22:17.010942715 +0000 UTC m=+0.537434302 container died 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-23a4ba7ede35dd4241b49e8996f2fe1b56eab8fe5198b68893f4cac714937d33-merged.mount: Deactivated successfully.
Feb 01 15:22:17 compute-0 podman[251592]: 2026-02-01 15:22:17.05893984 +0000 UTC m=+0.585431427 container remove 8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:22:17 compute-0 systemd[1]: libpod-conmon-8260a21daaf55f7095a5a22e77004161186bd9cdc924e1977d9e32688e07ba70.scope: Deactivated successfully.
Feb 01 15:22:17 compute-0 sudo[251515]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:17 compute-0 sudo[251641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:17 compute-0 sudo[251641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:17 compute-0 sudo[251641]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e03e65cf-03e2-407f-9515-a854a7393b45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e03e65cf-03e2-407f-9515-a854a7393b45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Feb 01 15:22:17 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:22:17.184+0000 7f8267782640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03e65cf-03e2-407f-9515-a854a7393b45' of type subvolume
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03e65cf-03e2-407f-9515-a854a7393b45' of type subvolume
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e03e65cf-03e2-407f-9515-a854a7393b45'' moved to trashcan
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03e65cf-03e2-407f-9515-a854a7393b45, vol_name:cephfs) < ""
Feb 01 15:22:17 compute-0 sudo[251666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:22:17 compute-0 sudo[251666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.521960806 +0000 UTC m=+0.054203400 container create 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:22:17 compute-0 systemd[1]: Started libpod-conmon-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope.
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.494738093 +0000 UTC m=+0.026980757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.604014096 +0000 UTC m=+0.136256700 container init 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.612870114 +0000 UTC m=+0.145112688 container start 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:22:17 compute-0 optimistic_vaughan[251719]: 167 167
Feb 01 15:22:17 compute-0 systemd[1]: libpod-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope: Deactivated successfully.
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.617810372 +0000 UTC m=+0.150052946 container attach 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.618559533 +0000 UTC m=+0.150802127 container died 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 01 15:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1e86ef6352f8d75c5a2bae55400e15a2c4a9ba87b8d28d35c7650dc9f7253aa-merged.mount: Deactivated successfully.
Feb 01 15:22:17 compute-0 podman[251703]: 2026-02-01 15:22:17.664076709 +0000 UTC m=+0.196319303 container remove 5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_vaughan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 01 15:22:17 compute-0 systemd[1]: libpod-conmon-5a1251a1d0ca98a5e2ce32d25ea192a78f6d1102c7a9775d175cccf1dc013b68.scope: Deactivated successfully.
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:22:17
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'default.rgw.log']
Feb 01 15:22:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:22:17 compute-0 podman[251743]: 2026-02-01 15:22:17.854383282 +0000 UTC m=+0.059235721 container create d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 01 15:22:17 compute-0 systemd[1]: Started libpod-conmon-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope.
Feb 01 15:22:17 compute-0 podman[251743]: 2026-02-01 15:22:17.833081635 +0000 UTC m=+0.037934074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:17 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:17 compute-0 podman[251743]: 2026-02-01 15:22:17.946109503 +0000 UTC m=+0.150961942 container init d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 01 15:22:17 compute-0 ceph-mon[75179]: pgmap v1124: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s wr, 2 op/s
Feb 01 15:22:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "format": "json"}]: dispatch
Feb 01 15:22:17 compute-0 ceph-mon[75179]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03e65cf-03e2-407f-9515-a854a7393b45", "force": true, "format": "json"}]: dispatch
Feb 01 15:22:17 compute-0 podman[251743]: 2026-02-01 15:22:17.951999688 +0000 UTC m=+0.156852127 container start d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 01 15:22:17 compute-0 podman[251743]: 2026-02-01 15:22:17.955127936 +0000 UTC m=+0.159980355 container attach d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 01 15:22:18 compute-0 reverent_feistel[251759]: {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     "0": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "devices": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "/dev/loop3"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             ],
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_name": "ceph_lv0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_size": "21470642176",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "name": "ceph_lv0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "tags": {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_name": "ceph",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.crush_device_class": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.encrypted": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.objectstore": "bluestore",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_id": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.vdo": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.with_tpm": "0"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             },
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "vg_name": "ceph_vg0"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         }
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     ],
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     "1": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "devices": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "/dev/loop4"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             ],
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_name": "ceph_lv1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_size": "21470642176",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "name": "ceph_lv1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "tags": {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_name": "ceph",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.crush_device_class": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.encrypted": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.objectstore": "bluestore",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_id": "1",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.vdo": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.with_tpm": "0"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             },
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "vg_name": "ceph_vg1"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         }
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     ],
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     "2": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "devices": [
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "/dev/loop5"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             ],
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_name": "ceph_lv2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_size": "21470642176",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "name": "ceph_lv2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "tags": {
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.cluster_name": "ceph",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.crush_device_class": "",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.encrypted": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.objectstore": "bluestore",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osd_id": "2",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.vdo": "0",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:                 "ceph.with_tpm": "0"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             },
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "type": "block",
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:             "vg_name": "ceph_vg2"
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:         }
Feb 01 15:22:18 compute-0 reverent_feistel[251759]:     ]
Feb 01 15:22:18 compute-0 reverent_feistel[251759]: }
Feb 01 15:22:18 compute-0 systemd[1]: libpod-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope: Deactivated successfully.
Feb 01 15:22:18 compute-0 podman[251743]: 2026-02-01 15:22:18.235634307 +0000 UTC m=+0.440486726 container died d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 01 15:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-52acf193c2a9c1b13a03f2307cad12e42025f51571f2562b72d15ae3d8de20ba-merged.mount: Deactivated successfully.
Feb 01 15:22:18 compute-0 podman[251743]: 2026-02-01 15:22:18.279117745 +0000 UTC m=+0.483970164 container remove d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:22:18 compute-0 systemd[1]: libpod-conmon-d435057052ae08e3f9c186a56de17305c61941cd9ef70475cb58fc74a1160833.scope: Deactivated successfully.
Feb 01 15:22:18 compute-0 sudo[251666]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:18 compute-0 sudo[251780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:22:18 compute-0 sudo[251780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:18 compute-0 sudo[251780]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:18 compute-0 sudo[251805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:22:18 compute-0 sudo[251805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 2 op/s
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.698256011 +0000 UTC m=+0.039251771 container create 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:22:18 compute-0 systemd[1]: Started libpod-conmon-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope.
Feb 01 15:22:18 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.769575929 +0000 UTC m=+0.110571659 container init 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.676907022 +0000 UTC m=+0.017902842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.777592554 +0000 UTC m=+0.118588324 container start 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:22:18 compute-0 youthful_gates[251859]: 167 167
Feb 01 15:22:18 compute-0 systemd[1]: libpod-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope: Deactivated successfully.
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.78172712 +0000 UTC m=+0.122722890 container attach 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.782287236 +0000 UTC m=+0.123283066 container died 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ef28785b4f0cf66e584d52427f32756b39661b55c21688c4e015df1172a3e42-merged.mount: Deactivated successfully.
Feb 01 15:22:18 compute-0 podman[251843]: 2026-02-01 15:22:18.814223981 +0000 UTC m=+0.155219711 container remove 5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 01 15:22:18 compute-0 systemd[1]: libpod-conmon-5e3b037449b31e1574494de9c660fbaf7a93af78cb75a6e203a5b824453d27c0.scope: Deactivated successfully.
Feb 01 15:22:18 compute-0 podman[251885]: 2026-02-01 15:22:18.954985965 +0000 UTC m=+0.033650694 container create 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:22:18 compute-0 systemd[1]: Started libpod-conmon-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope.
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:19 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:19.03007407 +0000 UTC m=+0.108738799 container init 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:18.939431929 +0000 UTC m=+0.018096708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:19.037009774 +0000 UTC m=+0.115674503 container start 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:19.04008862 +0000 UTC m=+0.118753369 container attach 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:22:19 compute-0 lvm[251977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:22:19 compute-0 lvm[251977]: VG ceph_vg0 finished
Feb 01 15:22:19 compute-0 lvm[251980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:22:19 compute-0 lvm[251980]: VG ceph_vg1 finished
Feb 01 15:22:19 compute-0 lvm[251982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:22:19 compute-0 lvm[251982]: VG ceph_vg2 finished
Feb 01 15:22:19 compute-0 hardcore_hugle[251901]: {}
Feb 01 15:22:19 compute-0 systemd[1]: libpod-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Deactivated successfully.
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:19.779741089 +0000 UTC m=+0.858405818 container died 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 01 15:22:19 compute-0 systemd[1]: libpod-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Consumed 1.104s CPU time.
Feb 01 15:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1274e04ee8fb34caad3bc936eecd7ec37570dc018c040f67573a2ca178ef283a-merged.mount: Deactivated successfully.
Feb 01 15:22:19 compute-0 podman[251885]: 2026-02-01 15:22:19.820561273 +0000 UTC m=+0.899226002 container remove 7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 15:22:19 compute-0 systemd[1]: libpod-conmon-7459c65853b6f07f150d7b7cb0baa55fc2ee49b84f3b26962a8b7edbac6b79e4.scope: Deactivated successfully.
Feb 01 15:22:19 compute-0 sudo[251805]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:22:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:22:19 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:19 compute-0 sudo[251998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:22:19 compute-0 sudo[251998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:22:19 compute-0 sudo[251998]: pam_unix(sudo:session): session closed for user root
Feb 01 15:22:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb 01 15:22:19 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb 01 15:22:19 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb 01 15:22:19 compute-0 ceph-mon[75179]: pgmap v1125: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 2 op/s
Feb 01 15:22:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:19 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:22:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 4 op/s
Feb 01 15:22:20 compute-0 ceph-mon[75179]: osdmap e175: 3 total, 3 up, 3 in
Feb 01 15:22:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:21 compute-0 ceph-mon[75179]: pgmap v1127: 305 pgs: 305 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 4 op/s
Feb 01 15:22:22 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:22.028 154901 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:64:0f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '76:0c:13:64:99:39'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 01 15:22:22 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:22.030 154901 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 01 15:22:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb 01 15:22:23 compute-0 ceph-mon[75179]: pgmap v1128: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb 01 15:22:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb 01 15:22:25 compute-0 ceph-mon[75179]: pgmap v1129: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Feb 01 15:22:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb 01 15:22:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb 01 15:22:26 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb 01 15:22:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 3 op/s
Feb 01 15:22:27 compute-0 ceph-mon[75179]: osdmap e176: 3 total, 3 up, 3 in
Feb 01 15:22:27 compute-0 ceph-mon[75179]: pgmap v1131: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 56 KiB/s wr, 3 op/s
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659720693395622 of space, bias 1.0, pg target 0.19979162080186866 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005843620122686109 of space, bias 4.0, pg target 0.701234414722333 quantized to 16 (current 16)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:22:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 476 B/s rd, 52 KiB/s wr, 3 op/s
Feb 01 15:22:29 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:22:29.032 154901 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c3bd6005-873a-4620-bb39-624ed33e90e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 01 15:22:29 compute-0 ceph-mon[75179]: pgmap v1132: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 476 B/s rd, 52 KiB/s wr, 3 op/s
Feb 01 15:22:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 1 op/s
Feb 01 15:22:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:31 compute-0 ceph-mon[75179]: pgmap v1133: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 1 op/s
Feb 01 15:22:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb 01 15:22:33 compute-0 ceph-mon[75179]: pgmap v1134: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb 01 15:22:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb 01 15:22:35 compute-0 ceph-mon[75179]: pgmap v1135: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 0 op/s
Feb 01 15:22:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:37 compute-0 ceph-mon[75179]: pgmap v1136: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:39 compute-0 podman[252023]: 2026-02-01 15:22:39.014080462 +0000 UTC m=+0.085645281 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 01 15:22:39 compute-0 podman[252024]: 2026-02-01 15:22:39.042285192 +0000 UTC m=+0.108706207 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 01 15:22:39 compute-0 ceph-mon[75179]: pgmap v1137: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:41 compute-0 ceph-mon[75179]: pgmap v1138: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.671093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362671128, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1364, "num_deletes": 256, "total_data_size": 2299292, "memory_usage": 2344096, "flush_reason": "Manual Compaction"}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362682557, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2233789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25023, "largest_seqno": 26386, "table_properties": {"data_size": 2227249, "index_size": 3675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14105, "raw_average_key_size": 20, "raw_value_size": 2213986, "raw_average_value_size": 3190, "num_data_blocks": 167, "num_entries": 694, "num_filter_entries": 694, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769959247, "oldest_key_time": 1769959247, "file_creation_time": 1769959362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 11518 microseconds, and 6217 cpu microseconds.
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.682610) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2233789 bytes OK
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.682634) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684424) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684445) EVENT_LOG_v1 {"time_micros": 1769959362684437, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.684468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2293100, prev total WAL file size 2293100, number of live WAL files 2.
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.685113) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2181KB)], [56(9679KB)]
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362685192, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12145178, "oldest_snapshot_seqno": -1}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5717 keys, 10526799 bytes, temperature: kUnknown
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362765130, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10526799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10484920, "index_size": 26473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 142619, "raw_average_key_size": 24, "raw_value_size": 10378801, "raw_average_value_size": 1815, "num_data_blocks": 1099, "num_entries": 5717, "num_filter_entries": 5717, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769957398, "oldest_key_time": 0, "file_creation_time": 1769959362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "22ff331c-3ab9-4629-8bb9-0845546f6646", "db_session_id": "9H8HU9QM155BYJ6W9TB0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.765446) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10526799 bytes
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.766584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.2 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.5 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 6245, records dropped: 528 output_compression: NoCompression
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.766608) EVENT_LOG_v1 {"time_micros": 1769959362766596, "job": 30, "event": "compaction_finished", "compaction_time_micros": 79817, "compaction_time_cpu_micros": 33010, "output_level": 6, "num_output_files": 1, "total_output_size": 10526799, "num_input_records": 6245, "num_output_records": 5717, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362766944, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769959362768183, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.685025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:42 compute-0 ceph-mon[75179]: rocksdb: (Original Log Time 2026/02/01-15:22:42.768292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 01 15:22:43 compute-0 ceph-mon[75179]: pgmap v1139: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:45 compute-0 ceph-mon[75179]: pgmap v1140: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:47 compute-0 ceph-mon[75179]: pgmap v1141: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:22:48 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:22:49 compute-0 ceph-mon[75179]: pgmap v1142: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:22:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:22:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:22:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:22:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:51 compute-0 ceph-mon[75179]: pgmap v1143: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:22:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/2022608717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.145 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.340 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:22:52 compute-0 nova_compute[238794]: 2026-02-01 15:22:52.340 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:53 compute-0 nova_compute[238794]: 2026-02-01 15:22:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:53 compute-0 nova_compute[238794]: 2026-02-01 15:22:53.321 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:22:53 compute-0 ceph-mon[75179]: pgmap v1144: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:55 compute-0 ceph-mon[75179]: pgmap v1145: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:56 compute-0 nova_compute[238794]: 2026-02-01 15:22:56.322 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:22:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:57 compute-0 nova_compute[238794]: 2026-02-01 15:22:57.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:57 compute-0 ceph-mon[75179]: pgmap v1146: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:58 compute-0 nova_compute[238794]: 2026-02-01 15:22:58.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:22:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:22:59 compute-0 ceph-mon[75179]: pgmap v1147: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.347 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.348 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.348 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:23:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:23:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209366518' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:23:00 compute-0 nova_compute[238794]: 2026-02-01 15:23:00.846 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.026 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.028 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.028 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.029 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.118 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.118 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.164 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:23:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:23:01 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450958973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.692 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.698 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.716 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.719 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:23:01 compute-0 nova_compute[238794]: 2026-02-01 15:23:01.720 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:23:01 compute-0 ceph-mon[75179]: pgmap v1148: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:01 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1209366518' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:23:01 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1450958973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:23:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:03 compute-0 nova_compute[238794]: 2026-02-01 15:23:03.722 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:03 compute-0 ceph-mon[75179]: pgmap v1149: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:05 compute-0 ceph-mon[75179]: pgmap v1150: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:07 compute-0 ceph-mon[75179]: pgmap v1151: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.820 154901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:23:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.821 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:23:07 compute-0 ovn_metadata_agent[154890]: 2026-02-01 15:23:07.821 154901 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:23:08 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:09 compute-0 ceph-mon[75179]: pgmap v1152: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:10 compute-0 podman[252113]: 2026-02-01 15:23:10.004534788 +0000 UTC m=+0.075885028 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 01 15:23:10 compute-0 podman[252114]: 2026-02-01 15:23:10.090389294 +0000 UTC m=+0.160280763 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 01 15:23:10 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:11 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:11 compute-0 ceph-mon[75179]: pgmap v1153: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:12 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:13 compute-0 ceph-mon[75179]: pgmap v1154: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:14 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:15 compute-0 ceph-mon[75179]: pgmap v1155: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:16 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:16 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:17 compute-0 ceph-mon[75179]: pgmap v1156: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Optimize plan auto_2026-02-01_15:23:17
Feb 01 15:23:17 compute-0 ceph-mgr[75469]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 01 15:23:17 compute-0 ceph-mgr[75469]: [balancer INFO root] do_upmap
Feb 01 15:23:17 compute-0 ceph-mgr[75469]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'images']
Feb 01 15:23:17 compute-0 ceph-mgr[75469]: [balancer INFO root] prepared 0/10 upmap changes
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f82990cd130>)]
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825bee78b0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f825be85340>)]
Feb 01 15:23:18 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:23:19 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 01 15:23:19 compute-0 ceph-mon[75179]: pgmap v1157: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:20 compute-0 sudo[252158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:23:20 compute-0 sudo[252158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:20 compute-0 sudo[252158]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:20 compute-0 sudo[252183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 01 15:23:20 compute-0 sudo[252183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:20 compute-0 sudo[252183]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:20 compute-0 sudo[252239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:23:20 compute-0 sudo[252239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:20 compute-0 sudo[252239]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:20 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Feb 01 15:23:20 compute-0 sudo[252264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 01 15:23:20 compute-0 sudo[252264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 01 15:23:20 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.843582886 +0000 UTC m=+0.058220802 container create eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 01 15:23:20 compute-0 systemd[1]: Started libpod-conmon-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope.
Feb 01 15:23:20 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.807781773 +0000 UTC m=+0.022419749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.911015016 +0000 UTC m=+0.125652902 container init eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.919108153 +0000 UTC m=+0.133746039 container start eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.922056595 +0000 UTC m=+0.136694511 container attach eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:23:20 compute-0 serene_chatelet[252319]: 167 167
Feb 01 15:23:20 compute-0 systemd[1]: libpod-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope: Deactivated successfully.
Feb 01 15:23:20 compute-0 conmon[252319]: conmon eb94583fb4f036014659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope/container/memory.events
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.923984969 +0000 UTC m=+0.138622885 container died eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 01 15:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e153eb2dca1782aaf039009d7509ad4ce09ff96a616aac467cdb1af52f2173fe-merged.mount: Deactivated successfully.
Feb 01 15:23:20 compute-0 podman[252302]: 2026-02-01 15:23:20.960376249 +0000 UTC m=+0.175014135 container remove eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 01 15:23:20 compute-0 systemd[1]: libpod-conmon-eb94583fb4f0360146590575e2fa9a92b25d9ef7cd47ac10e66546630edbe876.scope: Deactivated successfully.
Feb 01 15:23:21 compute-0 podman[252343]: 2026-02-01 15:23:21.083758627 +0000 UTC m=+0.030323021 container create 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 15:23:21 compute-0 systemd[1]: Started libpod-conmon-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope.
Feb 01 15:23:21 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:21 compute-0 podman[252343]: 2026-02-01 15:23:21.070052133 +0000 UTC m=+0.016616547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:21 compute-0 podman[252343]: 2026-02-01 15:23:21.179415178 +0000 UTC m=+0.125979652 container init 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 01 15:23:21 compute-0 podman[252343]: 2026-02-01 15:23:21.188030729 +0000 UTC m=+0.134595163 container start 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:23:21 compute-0 podman[252343]: 2026-02-01 15:23:21.19483021 +0000 UTC m=+0.141394634 container attach 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:23:21 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:21 compute-0 serene_lumiere[252360]: --> passed data devices: 0 physical, 3 LVM
Feb 01 15:23:21 compute-0 serene_lumiere[252360]: --> All data devices are unavailable
Feb 01 15:23:21 compute-0 systemd[1]: libpod-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope: Deactivated successfully.
Feb 01 15:23:21 compute-0 podman[252380]: 2026-02-01 15:23:21.774660099 +0000 UTC m=+0.025375342 container died 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 01 15:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc0c04c7e1015f5bfb6cc43b2f233a7d738f071a17c7ce2f3880e32cf6b2a21-merged.mount: Deactivated successfully.
Feb 01 15:23:21 compute-0 podman[252380]: 2026-02-01 15:23:21.806711508 +0000 UTC m=+0.057426721 container remove 91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:23:21 compute-0 systemd[1]: libpod-conmon-91e60e3628c220daa4128b7592afa3313dbe71bc313b81fdc2f1539e32697ead.scope: Deactivated successfully.
Feb 01 15:23:21 compute-0 ceph-mon[75179]: pgmap v1158: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Feb 01 15:23:21 compute-0 ceph-mon[75179]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.viosrg(active, since 33m)
Feb 01 15:23:21 compute-0 sudo[252264]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:21 compute-0 sudo[252395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:23:21 compute-0 sudo[252395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:21 compute-0 sudo[252395]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:21 compute-0 sudo[252420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- lvm list --format json
Feb 01 15:23:21 compute-0 sudo[252420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.213787856 +0000 UTC m=+0.031425222 container create 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:23:22 compute-0 systemd[1]: Started libpod-conmon-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope.
Feb 01 15:23:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.280542237 +0000 UTC m=+0.098179643 container init 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.287136151 +0000 UTC m=+0.104773507 container start 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.290064734 +0000 UTC m=+0.107702120 container attach 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 01 15:23:22 compute-0 systemd[1]: libpod-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope: Deactivated successfully.
Feb 01 15:23:22 compute-0 laughing_dhawan[252472]: 167 167
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.291395871 +0000 UTC m=+0.109033277 container died 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.20036358 +0000 UTC m=+0.018000966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-256f3ba90a51354c474f352424ecb085c5dc2ddac564003353c5af6214c64dad-merged.mount: Deactivated successfully.
Feb 01 15:23:22 compute-0 podman[252456]: 2026-02-01 15:23:22.329811817 +0000 UTC m=+0.147449183 container remove 210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 01 15:23:22 compute-0 systemd[1]: libpod-conmon-210b05647dad23d2ccb026d68c6ca005b4e16bc154970cad0e51e49d5fa7e51b.scope: Deactivated successfully.
Feb 01 15:23:22 compute-0 podman[252496]: 2026-02-01 15:23:22.489634216 +0000 UTC m=+0.064652933 container create ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 01 15:23:22 compute-0 systemd[1]: Started libpod-conmon-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope.
Feb 01 15:23:22 compute-0 podman[252496]: 2026-02-01 15:23:22.461284942 +0000 UTC m=+0.036303709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:22 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:22 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:22 compute-0 podman[252496]: 2026-02-01 15:23:22.596239914 +0000 UTC m=+0.171258661 container init ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 01 15:23:22 compute-0 podman[252496]: 2026-02-01 15:23:22.60787507 +0000 UTC m=+0.182893787 container start ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 01 15:23:22 compute-0 podman[252496]: 2026-02-01 15:23:22.612060567 +0000 UTC m=+0.187079254 container attach ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 01 15:23:22 compute-0 ceph-mon[75179]: mgrmap e20: compute-0.viosrg(active, since 33m)
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]: {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     "0": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "devices": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "/dev/loop3"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             ],
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_name": "ceph_lv0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_size": "21470642176",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e67ca44a-7e61-43f9-bf2b-cf15de50303a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "name": "ceph_lv0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "tags": {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_uuid": "qLB6n6-I9vT-bTBy-A4Lv-4I8Z-vuaf-ngRv7m",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_name": "ceph",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.crush_device_class": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.encrypted": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.objectstore": "bluestore",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_fsid": "e67ca44a-7e61-43f9-bf2b-cf15de50303a",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_id": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.vdo": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.with_tpm": "0"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             },
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "vg_name": "ceph_vg0"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         }
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     ],
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     "1": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "devices": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "/dev/loop4"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             ],
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_name": "ceph_lv1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_size": "21470642176",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fd39fcf7-28de-4953-80ed-edf6e0aa6fd0,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "name": "ceph_lv1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "tags": {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_uuid": "Yyc4xP-r8Pt-3EkH-BSBc-sJL0-iIKz-G4Hq21",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_name": "ceph",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.crush_device_class": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.encrypted": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.objectstore": "bluestore",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_fsid": "fd39fcf7-28de-4953-80ed-edf6e0aa6fd0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_id": "1",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.vdo": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.with_tpm": "0"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             },
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "vg_name": "ceph_vg1"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         }
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     ],
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     "2": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "devices": [
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "/dev/loop5"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             ],
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_name": "ceph_lv2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_size": "21470642176",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=7fabf513-99fe-4b35-b072-3f0e487337b7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "lv_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "name": "ceph_lv2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "tags": {
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.block_uuid": "GTJles-peUS-2cyI-b7p1-fggU-jN37-gvLUdP",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cephx_lockbox_secret": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_fsid": "2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.cluster_name": "ceph",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.crush_device_class": "",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.encrypted": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.objectstore": "bluestore",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_fsid": "7fabf513-99fe-4b35-b072-3f0e487337b7",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osd_id": "2",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.vdo": "0",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:                 "ceph.with_tpm": "0"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             },
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "type": "block",
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:             "vg_name": "ceph_vg2"
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:         }
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]:     ]
Feb 01 15:23:22 compute-0 festive_dubinsky[252513]: }
Feb 01 15:23:22 compute-0 systemd[1]: libpod-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope: Deactivated successfully.
Feb 01 15:23:22 compute-0 podman[252523]: 2026-02-01 15:23:22.920096859 +0000 UTC m=+0.024720923 container died ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 01 15:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde073e37bbdef63b21476137ab97ab6d383a8919925073b332702c0b830e334-merged.mount: Deactivated successfully.
Feb 01 15:23:22 compute-0 podman[252523]: 2026-02-01 15:23:22.961988993 +0000 UTC m=+0.066613047 container remove ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:23:22 compute-0 systemd[1]: libpod-conmon-ca9f58044879511646c22b0ea823f4c45c4d63ac2c53751e57d513dd36f70a03.scope: Deactivated successfully.
Feb 01 15:23:23 compute-0 sudo[252420]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:23 compute-0 sudo[252538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 01 15:23:23 compute-0 sudo[252538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:23 compute-0 sudo[252538]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:23 compute-0 sudo[252563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f -- raw list --format json
Feb 01 15:23:23 compute-0 sudo[252563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.402265762 +0000 UTC m=+0.052432951 container create 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 01 15:23:23 compute-0 systemd[1]: Started libpod-conmon-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope.
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.377619361 +0000 UTC m=+0.027786600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.487408968 +0000 UTC m=+0.137576147 container init 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.494840386 +0000 UTC m=+0.145007555 container start 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 01 15:23:23 compute-0 reverent_banzai[252617]: 167 167
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.498744165 +0000 UTC m=+0.148911324 container attach 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 01 15:23:23 compute-0 systemd[1]: libpod-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope: Deactivated successfully.
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.499730073 +0000 UTC m=+0.149897232 container died 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 01 15:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fda374ed9df44ca5d42fc3f5de54c7ce3838a13102a99292312174acdafd9c7-merged.mount: Deactivated successfully.
Feb 01 15:23:23 compute-0 podman[252600]: 2026-02-01 15:23:23.532363798 +0000 UTC m=+0.182530947 container remove 70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banzai, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 01 15:23:23 compute-0 systemd[1]: libpod-conmon-70a8da219fbfdfb98e75c479b95ebbeda16dbaea1a194ef034da49cd6c70ef9c.scope: Deactivated successfully.
Feb 01 15:23:23 compute-0 podman[252641]: 2026-02-01 15:23:23.65447065 +0000 UTC m=+0.034722315 container create bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 15:23:23 compute-0 systemd[1]: Started libpod-conmon-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope.
Feb 01 15:23:23 compute-0 systemd[1]: Started libcrun container.
Feb 01 15:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 01 15:23:23 compute-0 podman[252641]: 2026-02-01 15:23:23.63914646 +0000 UTC m=+0.019398145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 01 15:23:23 compute-0 podman[252641]: 2026-02-01 15:23:23.735156891 +0000 UTC m=+0.115408606 container init bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 01 15:23:23 compute-0 podman[252641]: 2026-02-01 15:23:23.74334167 +0000 UTC m=+0.123593375 container start bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 01 15:23:23 compute-0 podman[252641]: 2026-02-01 15:23:23.747523967 +0000 UTC m=+0.127775682 container attach bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 01 15:23:23 compute-0 ceph-mon[75179]: pgmap v1159: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:24 compute-0 lvm[252738]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:23:24 compute-0 lvm[252738]: VG ceph_vg1 finished
Feb 01 15:23:24 compute-0 lvm[252737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:23:24 compute-0 lvm[252737]: VG ceph_vg0 finished
Feb 01 15:23:24 compute-0 lvm[252740]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:23:24 compute-0 lvm[252740]: VG ceph_vg2 finished
Feb 01 15:23:24 compute-0 happy_colden[252658]: {}
Feb 01 15:23:24 compute-0 systemd[1]: libpod-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Deactivated successfully.
Feb 01 15:23:24 compute-0 podman[252641]: 2026-02-01 15:23:24.532916358 +0000 UTC m=+0.913168043 container died bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 01 15:23:24 compute-0 systemd[1]: libpod-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Consumed 1.177s CPU time.
Feb 01 15:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b6073192ea1bfe87e5efbfaf015a64f10a8d0b2ab439d418afbc25d11366cb-merged.mount: Deactivated successfully.
Feb 01 15:23:24 compute-0 podman[252641]: 2026-02-01 15:23:24.573748422 +0000 UTC m=+0.954000127 container remove bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 01 15:23:24 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:24 compute-0 systemd[1]: libpod-conmon-bfe7a72b7f44332538b069d23f132f9d90b6d889b89541b1b39aad1265d7922c.scope: Deactivated successfully.
Feb 01 15:23:24 compute-0 sudo[252563]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 01 15:23:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:24 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 01 15:23:24 compute-0 ceph-mon[75179]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:24 compute-0 sudo[252755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 01 15:23:24 compute-0 sudo[252755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 01 15:23:24 compute-0 sudo[252755]: pam_unix(sudo:session): session closed for user root
Feb 01 15:23:25 compute-0 ceph-mon[75179]: pgmap v1160: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:25 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' 
Feb 01 15:23:26 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:26 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:27 compute-0 ceph-mon[75179]: pgmap v1161: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] _maybe_adjust
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659720693395622 of space, bias 1.0, pg target 0.19979162080186866 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005843358214668303 of space, bias 4.0, pg target 0.7012029857601964 quantized to 16 (current 16)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 01 15:23:28 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:29 compute-0 ceph-mon[75179]: pgmap v1162: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:30 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:31 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:31 compute-0 ceph-mon[75179]: pgmap v1163: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Feb 01 15:23:32 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Feb 01 15:23:33 compute-0 sshd-session[252780]: Accepted publickey for zuul from 192.168.122.10 port 46268 ssh2: ECDSA SHA256:Ajlkfd72z2mf1Cx74MFHL8+YqNY/k8o2Fc/E5RUoJEE
Feb 01 15:23:33 compute-0 systemd-logind[786]: New session 51 of user zuul.
Feb 01 15:23:33 compute-0 systemd[1]: Started Session 51 of User zuul.
Feb 01 15:23:33 compute-0 sshd-session[252780]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 01 15:23:33 compute-0 sudo[252784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 01 15:23:33 compute-0 sudo[252784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 01 15:23:33 compute-0 ceph-mon[75179]: pgmap v1164: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Feb 01 15:23:34 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:35 compute-0 ceph-mon[75179]: pgmap v1165: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:35 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:36 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:36 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14504 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:36 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:36 compute-0 ceph-mon[75179]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:37 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 01 15:23:37 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008845376' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 01 15:23:37 compute-0 ceph-mon[75179]: from='client.14504 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:37 compute-0 ceph-mon[75179]: pgmap v1166: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:37 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1008845376' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 01 15:23:38 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:39 compute-0 ceph-mon[75179]: pgmap v1167: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:40 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:41 compute-0 podman[253081]: 2026-02-01 15:23:41.000936647 +0000 UTC m=+0.080271381 container health_status 1cfea894f9d2e73b8164212602393f85b4893879340d1051ead043b8e6051815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 01 15:23:41 compute-0 podman[253082]: 2026-02-01 15:23:41.030388252 +0000 UTC m=+0.109890801 container health_status f81b4255baae5bc63483ecb4f427dc9212cab551edcbdc627c2ac937fbcd3f16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'dc7498bdc716e27101652478355c66564471fbf2f90816492e0de438d150496b-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d-e98f8aab2dc9e60a26f56c821eddd92954d37aa3278b9ce841194d01c3d73b4d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 01 15:23:41 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:41 compute-0 ceph-mon[75179]: pgmap v1168: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:42 compute-0 ovs-vsctl[253154]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 01 15:23:42 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:43 compute-0 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 01 15:23:43 compute-0 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 01 15:23:43 compute-0 virtqemud[238696]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 01 15:23:43 compute-0 ceph-mon[75179]: pgmap v1169: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:43 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: cache status {prefix=cache status} (starting...)
Feb 01 15:23:43 compute-0 lvm[253478]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 01 15:23:43 compute-0 lvm[253478]: VG ceph_vg2 finished
Feb 01 15:23:43 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: client ls {prefix=client ls} (starting...)
Feb 01 15:23:44 compute-0 lvm[253513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 01 15:23:44 compute-0 lvm[253513]: VG ceph_vg0 finished
Feb 01 15:23:44 compute-0 lvm[253518]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 01 15:23:44 compute-0 lvm[253518]: VG ceph_vg1 finished
Feb 01 15:23:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:44 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: damage ls {prefix=damage ls} (starting...)
Feb 01 15:23:44 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump loads {prefix=dump loads} (starting...)
Feb 01 15:23:44 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:44 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:44 compute-0 ceph-mon[75179]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:44 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 01 15:23:44 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 01 15:23:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:45 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063533233' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 01 15:23:45 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 01 15:23:45 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334816464' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:45 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14516 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:45 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 01 15:23:45 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:45.491+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 01 15:23:45 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: pgmap v1170: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:45 compute-0 ceph-mon[75179]: from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:45 compute-0 ceph-mon[75179]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:45 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2063533233' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 01 15:23:45 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1334816464' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 01 15:23:45 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: ops {prefix=ops} (starting...)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467706821' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 01 15:23:45 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 01 15:23:45 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596127942' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 01 15:23:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592022745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:46 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: session ls {prefix=session ls} (starting...)
Feb 01 15:23:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 01 15:23:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3229577900' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mds[95382]: mds.cephfs.compute-0.agpbju asok_command: status {prefix=status} (starting...)
Feb 01 15:23:46 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:46 compute-0 ceph-mon[75179]: from='client.14516 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1467706821' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/596127942' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/592022745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3229577900' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 01 15:23:46 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:46 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 01 15:23:46 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822312700' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 01 15:23:47 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 01 15:23:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834918764' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 01 15:23:47 compute-0 ceph-mon[75179]: pgmap v1171: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:47 compute-0 ceph-mon[75179]: from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:47 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2822312700' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 01 15:23:47 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1834918764' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 01 15:23:47 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 01 15:23:47 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762072287' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 01 15:23:48 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 01 15:23:48 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730721995' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 01 15:23:48 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:23:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 01 15:23:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012918075' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 01 15:23:49 compute-0 ceph-mon[75179]: from='client.14532 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:49 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/762072287' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 01 15:23:49 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2730721995' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] scanning for idle connections..
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: [volumes INFO mgr_util] cleaning up connections: []
Feb 01 15:23:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 01 15:23:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738769593' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 01 15:23:49 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 01 15:23:49 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066002277' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14546 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:49 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 01 15:23:49 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:49.802+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 01 15:23:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: pgmap v1172: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3012918075' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1738769593' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1066002277' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 01 15:23:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874586709' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:50 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 01 15:23:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002663905' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 01 15:23:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:23:50 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 01 15:23:50 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb 01 15:23:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:48.180370+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.0 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 53)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:48.166226+0000 osd.2 (osd.2) 52 : cluster [DBG] 10.0 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:48.180370+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.0 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 1335296 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712936 data_alloc: 218103808 data_used: 4907
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:19.616207+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 1327104 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:20.616574+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:50.131120+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:50.141720+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 55)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:50.131120+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:50.141720+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 1327104 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:21.616856+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19(unlocked)] enter Initial
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=0 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000132 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=0 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000040
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000306 1 0.000144
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000266 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000631 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 99 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 1310720 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:22.617175+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001632 2 0.000372
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002386 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002429 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=99) [2] r=0 lpr=99 pi=[55,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000125 1 0.000182
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fcef7000/0x0/0x4ffc00000, data 0x94d9b/0x133000, compress 0x0/0x0/0x0, omap 0xd31a, meta 0x2bc2ce6), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 1302528 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:23.617335+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:53.162396+0000 osd.2 (osd.2) 56 : cluster [DBG] 5.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:53.172870+0000 osd.2 (osd.2) 57 : cluster [DBG] 5.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.003290 6 0.000056
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=55/55 les/c/f=56/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003966 3 0.000144
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000073 1 0.000068
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 lc 38'60 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 57)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:53.162396+0000 osd.2 (osd.2) 56 : cluster [DBG] 5.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:53.172870+0000 osd.2 (osd.2) 57 : cluster [DBG] 5.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.063830 1 0.000049
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 101 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 1318912 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746573 data_alloc: 218103808 data_used: 4907
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:24.617536+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.600705147s of 10.651138306s, submitted: 32
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.967675 1 0.000051
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive 1.035690 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started 2.039033 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[55,100)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000207 1 0.000277
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Start 0.000039 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000129
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Feb 01 15:23:51 compute-0 ceph-osd[88066]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001303 3 0.000067
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 102 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1236992 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fceeb000/0x0/0x4ffc00000, data 0x99fdd/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:25.617688+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:55.134397+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:53:55.144850+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002432 2 0.000130
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003907 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 59)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:55.134397+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:53:55.144850+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003832 3 0.000287
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] local-lis/les=102/103 n=6 ec=48/32 lis/c=102/55 les/c/f=103/56/0 sis=102) [2] r=0 lpr=102 pi=[55,102)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 188416 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:26.617898+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 188416 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:27.618014+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 147456 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:28.618162+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 147456 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 753638 data_alloc: 218103808 data_used: 4907
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:29.618326+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 131072 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:30.618444+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:00.024987+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:00.035567+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 61)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:00.024987+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:00.035567+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 122880 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fceea000/0x0/0x4ffc00000, data 0x9ba2c/0x142000, compress 0x0/0x0/0x0, omap 0xdd46, meta 0x2bc22ba), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:31.618638+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 122880 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:32.618781+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:02.051674+0000 osd.2 (osd.2) 62 : cluster [DBG] 5.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:02.062237+0000 osd.2 (osd.2) 63 : cluster [DBG] 5.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 63)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:02.051674+0000 osd.2 (osd.2) 62 : cluster [DBG] 5.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:02.062237+0000 osd.2 (osd.2) 63 : cluster [DBG] 5.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 114688 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:33.618949+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 114688 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760772 data_alloc: 218103808 data_used: 4907
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x9d5c8/0x145000, compress 0x0/0x0/0x0, omap 0xdfd1, meta 0x2bc202f), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 43.935603 77 0.000345
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active 43.940417 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary 44.947390 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] exit Started 44.947452 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=79) [2] r=0 lpr=79 crt=57'487 mlcod 0'0 active mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064671516s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 active pruub 187.699172974s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] exit Reset 0.000079 1 0.000131
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] exit Start 0.000009 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 105 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105 pruub=12.064629555s) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY pruub 187.699172974s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:34.619097+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079073906s of 10.115900993s, submitted: 19
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/Stray 0.802265 3 0.000164
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started 0.802306 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=-1 lpr=105 pi=[79,105)/1 crt=57'487 unknown NOTIFY mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] exit Reset 0.000075 1 0.000104
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000039
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 106 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 81920 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:35.619247+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:05.046699+0000 osd.2 (osd.2) 64 : cluster [DBG] 5.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:05.057353+0000 osd.2 (osd.2) 65 : cluster [DBG] 5.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011911 4 0.000081
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012051 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=79/80 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 65)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:05.046699+0000 osd.2 (osd.2) 64 : cluster [DBG] 5.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:05.057353+0000 osd.2 (osd.2) 65 : cluster [DBG] 5.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.004974 5 0.000388
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000136 1 0.000064
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000519 1 0.000190
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063635 2 0.000106
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 0 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:36.619554+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.957100 1 0.000079
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary/Active 1.026742 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started/Primary 2.038831 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] exit Started 2.038865 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[79,106)/1 crt=57'487 mlcod 57'487 active+remapped mbc={255={}}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977913857s) [0] async=[0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 active pruub 193.453887939s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] exit Reset 0.000368 1 0.000452
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] exit Start 0.000041 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108 pruub=14.977710724s) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY pruub 193.453887939s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 983040 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:37.619674+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 983040 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/Stray 1.265489 6 0.000170
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001688 2 0.000079
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 DELETING pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069028 2 0.000378
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070794 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] lb MIN local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=-1 lpr=108 pi=[79,108)/1 crt=57'487 unknown NOTIFY mbc={}] exit Started 1.336395 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:38.619937+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced9000/0x0/0x4ffc00000, data 0xa40b1/0x151000, compress 0x0/0x0/0x0, omap 0xe9fd, meta 0x2bc1603), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764285 data_alloc: 218103808 data_used: 4907
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 942080 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xa595e/0x151000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:39.620111+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:09.079965+0000 osd.2 (osd.2) 66 : cluster [DBG] 5.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:09.090516+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 67)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:09.079965+0000 osd.2 (osd.2) 66 : cluster [DBG] 5.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:09.090516+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 942080 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xa595e/0x151000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:40.620431+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 933888 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:41.620674+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:42.620921+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:12.059712+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:12.070382+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 69)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:12.059712+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:12.070382+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:43.621196+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 109 handle_osd_map epochs [110,111], i have 109, src has [1,111]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.680807 137 0.000527
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.686886 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary 78.705678 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] exit Started 78.705714 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=65) [2] r=0 lpr=65 crt=57'485 mlcod 0'0 active mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319766998s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 active pruub 195.875000000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] exit Reset 0.000084 1 0.000128
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] exit Start 0.000009 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 111 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111 pruub=10.319724083s) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY pruub 195.875000000s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772342 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 909312 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:44.621407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003470421s of 10.054692268s, submitted: 31
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/Stray 1.013615 3 0.000043
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started 1.013702 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=-1 lpr=111 pi=[65,111)/1 crt=57'485 unknown NOTIFY mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] exit Reset 0.000142 1 0.000216
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000051
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000096 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 112 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 901120 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:45.621607+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:15.065985+0000 osd.2 (osd.2) 70 : cluster [DBG] 2.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:15.076466+0000 osd.2 (osd.2) 71 : cluster [DBG] 2.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 112 heartbeat osd_stat(store_statfs(0x4fced3000/0x0/0x4ffc00000, data 0xa9096/0x157000, compress 0x0/0x0/0x0, omap 0xef13, meta 0x2bc10ed), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 71)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:15.065985+0000 osd.2 (osd.2) 70 : cluster [DBG] 2.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:15.076466+0000 osd.2 (osd.2) 71 : cluster [DBG] 2.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015991 4 0.000127
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.016193 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=65/66 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 884736 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.252483 5 0.000380
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000093 1 0.000075
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000338 1 0.000037
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044640 2 0.000046
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:46.622006+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 113 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.709325 1 0.000091
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007180 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started/Primary 2.023417 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] exit Started 2.023451 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[65,112)/1 crt=57'485 mlcod 57'485 active+remapped mbc={255={}}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244387627s) [0] async=[0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 active pruub 203.837112427s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] exit Reset 0.000261 1 0.000378
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] exit Start 0.000014 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114 pruub=15.244242668s) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY pruub 203.837112427s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 868352 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:47.622144+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/Stray 1.014347 7 0.000120
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000109 1 0.000090
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 DELETING pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.047704 2 0.000323
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047921 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] lb MIN local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=-1 lpr=114 pi=[65,114)/1 crt=57'485 unknown NOTIFY mbc={}] exit Started 1.062361 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 868352 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:48.622273+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775463 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:49.622435+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:50.622622+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcec4000/0x0/0x4ffc00000, data 0xaf99a/0x162000, compress 0x0/0x0/0x0, omap 0xf93f, meta 0x2bc06c1), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:51.622787+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 84.769833 150 0.000511
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 84.772597 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary 85.781004 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] exit Started 85.781064 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=66) [2] r=0 lpr=66 crt=38'483 mlcod 0'0 active mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.231036186s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 active pruub 204.880294800s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] exit Reset 0.000120 1 0.000206
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] exit Start 0.000014 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 116 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116 pruub=11.230973244s) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY pruub 204.880294800s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:52.622992+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:22.001627+0000 osd.2 (osd.2) 72 : cluster [DBG] 8.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:22.012544+0000 osd.2 (osd.2) 73 : cluster [DBG] 8.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 73)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:22.001627+0000 osd.2 (osd.2) 72 : cluster [DBG] 8.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:22.012544+0000 osd.2 (osd.2) 73 : cluster [DBG] 8.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/Stray 1.022983 3 0.000067
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started 1.023036 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=-1 lpr=116 pi=[66,116)/1 crt=38'483 unknown NOTIFY mbc={}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000083 1 0.000120
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000045
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000066 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 117 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1843200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:53.623177+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782960 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011094 4 0.000105
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.011263 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=66/67 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 1843200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:54.623418+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.596240 5 0.000849
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000221 1 0.000121
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000433 1 0.000046
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039138 2 0.000081
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.136539459s of 10.210332870s, submitted: 32
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.381405 1 0.000071
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.018112 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started/Primary 2.029400 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] exit Started 2.029427 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[66,117)/1 crt=38'483 mlcod 38'483 active+remapped mbc={255={}}] enter Reset
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578289032s) [1] async=[1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 active pruub 212.280242920s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] exit Reset 0.000117 1 0.000186
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Started
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Start
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] exit Start 0.000009 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119 pruub=15.578214645s) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY pruub 212.280242920s@ mbc={}] enter Started/Stray
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1835008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:55.623577+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:25.021790+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:25.032475+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 1835008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/Stray 1.012802 7 0.000102
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000121 1 0.000084
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 75)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:25.021790+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:25.032475+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039371 2 0.000244
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039567 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] lb MIN local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=-1 lpr=119 pi=[66,119)/1 crt=38'483 unknown NOTIFY mbc={}] exit Started 1.052435 0 0.000000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb6483/0x16e000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:56.623795+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1802240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:57.623944+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:27.017140+0000 osd.2 (osd.2) 76 : cluster [DBG] 11.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:27.027701+0000 osd.2 (osd.2) 77 : cluster [DBG] 11.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1785856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 77)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:27.017140+0000 osd.2 (osd.2) 76 : cluster [DBG] 11.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:27.027701+0000 osd.2 (osd.2) 77 : cluster [DBG] 11.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:58.624122+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:28.043576+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:28.054181+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788020 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 79)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:28.043576+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:28.054181+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:59.624381+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 1794048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:00.624515+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:30.072222+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:30.082839+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 81)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:30.072222+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:30.082839+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:01.624743+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:31.087754+0000 osd.2 (osd.2) 82 : cluster [DBG] 11.3 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:31.098511+0000 osd.2 (osd.2) 83 : cluster [DBG] 11.3 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 1777664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 83)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:31.087754+0000 osd.2 (osd.2) 82 : cluster [DBG] 11.3 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:31.098511+0000 osd.2 (osd.2) 83 : cluster [DBG] 11.3 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:02.625028+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:03.625154+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:33.102163+0000 osd.2 (osd.2) 84 : cluster [DBG] 4.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:33.112719+0000 osd.2 (osd.2) 85 : cluster [DBG] 4.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795259 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 85)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:33.102163+0000 osd.2 (osd.2) 84 : cluster [DBG] 4.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:33.112719+0000 osd.2 (osd.2) 85 : cluster [DBG] 4.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:04.625398+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1769472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:05.625619+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:35.016179+0000 osd.2 (osd.2) 86 : cluster [DBG] 8.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:35.026774+0000 osd.2 (osd.2) 87 : cluster [DBG] 8.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 87)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:35.016179+0000 osd.2 (osd.2) 86 : cluster [DBG] 8.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:35.026774+0000 osd.2 (osd.2) 87 : cluster [DBG] 8.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:06.625852+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:07.626002+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69451776 unmapped: 1761280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:08.626163+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797672 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1753088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:09.626358+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.676446915s of 14.723983765s, submitted: 18
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1753088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:10.626511+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:40.000290+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:40.010689+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 89)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:40.000290+0000 osd.2 (osd.2) 88 : cluster [DBG] 3.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:40.010689+0000 osd.2 (osd.2) 89 : cluster [DBG] 3.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1720320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:11.626721+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:41.037156+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:41.047710+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 91)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:41.037156+0000 osd.2 (osd.2) 90 : cluster [DBG] 7.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:41.047710+0000 osd.2 (osd.2) 91 : cluster [DBG] 7.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1720320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:12.627165+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:13.627373+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802494 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:14.627546+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1712128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:15.627694+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1703936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:16.628048+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:46.115023+0000 osd.2 (osd.2) 92 : cluster [DBG] 11.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:46.125720+0000 osd.2 (osd.2) 93 : cluster [DBG] 11.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 93)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:46.115023+0000 osd.2 (osd.2) 92 : cluster [DBG] 11.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:46.125720+0000 osd.2 (osd.2) 93 : cluster [DBG] 11.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1703936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:17.628292+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:18.629576+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804907 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:19.630111+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1695744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:20.630345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1687552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:21.630507+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1687552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:22.630949+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:23.631606+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804907 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:24.631764+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1679360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:25.632343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1671168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:26.632944+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1671168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:27.633400+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1662976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:28.633886+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.070312500s of 19.081048965s, submitted: 6
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807318 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1662976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:29.634117+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:59.081479+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:54:59.092072+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 95)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:59.081479+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:54:59.092072+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 1654784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:30.634390+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1646592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:31.634654+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1638400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:32.634888+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:01.983092+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:01.993675+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 97)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:01.983092+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:01.993675+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1638400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:33.635257+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809729 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1630208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:34.635389+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1630208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:35.635629+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:36.635869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:37.636023+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:07.036009+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:07.046546+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 99)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:07.036009+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:07.046546+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1622016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:38.636273+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814553 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1613824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:39.636430+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:09.015623+0000 osd.2 (osd.2) 100 : cluster [DBG] 11.b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:09.026180+0000 osd.2 (osd.2) 101 : cluster [DBG] 11.b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 101)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:09.015623+0000 osd.2 (osd.2) 100 : cluster [DBG] 11.b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:09.026180+0000 osd.2 (osd.2) 101 : cluster [DBG] 11.b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1613824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:40.636627+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:41.636775+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1605632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:42.636944+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1597440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:43.637068+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814553 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:44.637215+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:45.637526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 1589248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:46.637709+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.962125778s of 17.975889206s, submitted: 8
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:47.637935+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:17.057257+0000 osd.2 (osd.2) 102 : cluster [DBG] 4.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:17.067772+0000 osd.2 (osd.2) 103 : cluster [DBG] 4.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 103)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:17.057257+0000 osd.2 (osd.2) 102 : cluster [DBG] 4.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:17.067772+0000 osd.2 (osd.2) 103 : cluster [DBG] 4.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1572864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:48.638186+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816964 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:49.638359+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1564672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:50.638538+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:51.638735+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:52.638990+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1556480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:53.639131+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816964 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:54.639261+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1548288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:55.639471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:25.165806+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:25.176498+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 105)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:25.165806+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:25.176498+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:56.639677+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.124578476s of 10.131445885s, submitted: 4
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:57.639803+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:27.188468+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:27.199010+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 107)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:27.188468+0000 osd.2 (osd.2) 106 : cluster [DBG] 8.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:27.199010+0000 osd.2 (osd.2) 107 : cluster [DBG] 8.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1540096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:58.639986+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824197 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:59.640112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:29.241386+0000 osd.2 (osd.2) 108 : cluster [DBG] 4.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:29.251919+0000 osd.2 (osd.2) 109 : cluster [DBG] 4.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 109)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:29.241386+0000 osd.2 (osd.2) 108 : cluster [DBG] 4.1 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:29.251919+0000 osd.2 (osd.2) 109 : cluster [DBG] 4.1 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1523712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:00.640415+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:01.640543+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:02.640734+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1515520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:03.640918+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826608 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:04.641060+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:34.251690+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:34.262073+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 111)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:34.251690+0000 osd.2 (osd.2) 110 : cluster [DBG] 7.5 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:34.262073+0000 osd.2 (osd.2) 111 : cluster [DBG] 7.5 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1507328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:05.641281+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 1499136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:06.641432+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:36.222517+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:36.233097+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 113)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:36.222517+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.2 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:36.233097+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.2 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:07.641627+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.085702896s of 11.097998619s, submitted: 8
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1490944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:08.641761+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:38.286852+0000 osd.2 (osd.2) 114 : cluster [DBG] 11.9 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:38.301031+0000 osd.2 (osd.2) 115 : cluster [DBG] 11.9 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 115)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:38.286852+0000 osd.2 (osd.2) 114 : cluster [DBG] 11.9 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:38.301031+0000 osd.2 (osd.2) 115 : cluster [DBG] 11.9 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833845 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1482752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:09.642013+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:39.300127+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:39.310648+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 117)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:39.300127+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:39.310648+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:10.642229+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:40.289459+0000 osd.2 (osd.2) 118 : cluster [DBG] 8.4 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:40.300002+0000 osd.2 (osd.2) 119 : cluster [DBG] 8.4 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 119)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:40.289459+0000 osd.2 (osd.2) 118 : cluster [DBG] 8.4 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:40.300002+0000 osd.2 (osd.2) 119 : cluster [DBG] 8.4 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1474560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:11.642429+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:41.298514+0000 osd.2 (osd.2) 120 : cluster [DBG] 4.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:41.308851+0000 osd.2 (osd.2) 121 : cluster [DBG] 4.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 121)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:41.298514+0000 osd.2 (osd.2) 120 : cluster [DBG] 4.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:41.308851+0000 osd.2 (osd.2) 121 : cluster [DBG] 4.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:12.642727+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:42.323082+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:42.333522+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 123)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:42.323082+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:42.333522+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:13.642926+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69754880 unmapped: 1458176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841078 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:14.643167+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:15.643328+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:45.303252+0000 osd.2 (osd.2) 124 : cluster [DBG] 7.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:45.313475+0000 osd.2 (osd.2) 125 : cluster [DBG] 7.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1449984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 125)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:45.303252+0000 osd.2 (osd.2) 124 : cluster [DBG] 7.a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:45.313475+0000 osd.2 (osd.2) 125 : cluster [DBG] 7.a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:16.643552+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:18.095565+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1441792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:19.095731+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.029262543s of 11.049038887s, submitted: 12
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845902 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:20.095883+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:49.335926+0000 osd.2 (osd.2) 126 : cluster [DBG] 11.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:49.346523+0000 osd.2 (osd.2) 127 : cluster [DBG] 11.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 127)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:49.335926+0000 osd.2 (osd.2) 126 : cluster [DBG] 11.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:49.346523+0000 osd.2 (osd.2) 127 : cluster [DBG] 11.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69795840 unmapped: 1417216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:21.096071+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1409024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:22.096458+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:51.299240+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:51.309814+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 129)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:51.299240+0000 osd.2 (osd.2) 128 : cluster [DBG] 7.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:51.309814+0000 osd.2 (osd.2) 129 : cluster [DBG] 7.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1409024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:23.097053+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:24.100244+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848313 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:25.100415+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1400832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:26.100529+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 1384448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:27.100643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:56.291649+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:56.302192+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 131)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:56.291649+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:56.302192+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1376256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:28.100871+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:57.248733+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:55:57.259398+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 133)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:57.248733+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:55:57.259398+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:29.101072+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853141 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1368064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:30.101267+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.821199417s of 10.835725784s, submitted: 8
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1351680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:31.101493+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:00.171546+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:00.181993+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 135)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:00.171546+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.1a scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:00.181993+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.1a scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:32.101815+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1343488 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:33.102063+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:02.186603+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:02.197104+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1335296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 137)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:02.186603+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:02.197104+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:34.102367+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857971 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1310720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:35.102562+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:36.102698+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1302528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:37.102928+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:38.103078+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:07.178581+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:07.189149+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 139)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:07.178581+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.1b scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:07.189149+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.1b scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:39.103326+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:08.195447+0000 osd.2 (osd.2) 140 : cluster [DBG] 4.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:08.206051+0000 osd.2 (osd.2) 141 : cluster [DBG] 4.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862797 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 141)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:08.195447+0000 osd.2 (osd.2) 140 : cluster [DBG] 4.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:08.206051+0000 osd.2 (osd.2) 141 : cluster [DBG] 4.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:40.103497+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.051155090s of 10.064999580s, submitted: 8
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:41.103620+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:10.236724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:10.247274+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1294336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 143)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:10.236724+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:10.247274+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:42.103780+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1286144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:43.103970+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1269760 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:44.104123+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:13.268537+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:13.279085+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867627 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 145)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:13.268537+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:13.279085+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:45.104320+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:46.104433+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:15.270089+0000 osd.2 (osd.2) 146 : cluster [DBG] 3.16 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:15.280855+0000 osd.2 (osd.2) 147 : cluster [DBG] 3.16 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1261568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 147)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:15.270089+0000 osd.2 (osd.2) 146 : cluster [DBG] 3.16 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:15.280855+0000 osd.2 (osd.2) 147 : cluster [DBG] 3.16 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:47.104597+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1236992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:48.104859+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:17.252020+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:17.262614+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 149)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:17.252020+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:17.262614+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:49.105039+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872455 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1228800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:50.105180+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:51.105349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1220608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:52.105484+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:53.105635+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1212416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:54.105910+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872455 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1204224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:55.106055+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:56.106197+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1187840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:57.106343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:58.106461+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.166114807s of 18.179595947s, submitted: 8
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:59.106600+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:28.415597+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:28.425717+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 151)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:28.415597+0000 osd.2 (osd.2) 150 : cluster [DBG] 8.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:28.425717+0000 osd.2 (osd.2) 151 : cluster [DBG] 8.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877283 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1179648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:00.106854+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:29.377059+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:29.387674+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 153)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:29.377059+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:29.387674+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1171456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:01.107047+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:30.384457+0000 osd.2 (osd.2) 154 : cluster [DBG] 4.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:30.395045+0000 osd.2 (osd.2) 155 : cluster [DBG] 4.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 155)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:30.384457+0000 osd.2 (osd.2) 154 : cluster [DBG] 4.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:30.395045+0000 osd.2 (osd.2) 155 : cluster [DBG] 4.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1171456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:02.107212+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:31.350336+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:31.364398+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 157)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:31.350336+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:31.364398+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:03.107651+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:32.306067+0000 osd.2 (osd.2) 158 : cluster [DBG] 3.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:32.316756+0000 osd.2 (osd.2) 159 : cluster [DBG] 3.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 159)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:32.306067+0000 osd.2 (osd.2) 158 : cluster [DBG] 3.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:32.316756+0000 osd.2 (osd.2) 159 : cluster [DBG] 3.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:04.107828+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:33.304830+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:33.315387+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 161)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:33.304830+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.12 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:33.315387+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.12 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886937 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 1163264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:05.108032+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:06.108158+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1155072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:07.108516+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:08.108643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:09.108844+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886937 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1146880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:10.109011+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:11.109166+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.858505249s of 12.879987717s, submitted: 12
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1138688 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:12.109383+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:41.296291+0000 osd.2 (osd.2) 162 : cluster [DBG] 3.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:41.306833+0000 osd.2 (osd.2) 163 : cluster [DBG] 3.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1130496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 163)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:41.296291+0000 osd.2 (osd.2) 162 : cluster [DBG] 3.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:41.306833+0000 osd.2 (osd.2) 163 : cluster [DBG] 3.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:13.109589+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1130496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:14.109710+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889350 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1122304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:15.109849+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:16.109961+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:45.258493+0000 osd.2 (osd.2) 164 : cluster [DBG] 3.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:45.268919+0000 osd.2 (osd.2) 165 : cluster [DBG] 3.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1105920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 165)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:45.258493+0000 osd.2 (osd.2) 164 : cluster [DBG] 3.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:45.268919+0000 osd.2 (osd.2) 165 : cluster [DBG] 3.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:17.110124+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:46.217087+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:46.227634+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:18.110259+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 167)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:46.217087+0000 osd.2 (osd.2) 166 : cluster [DBG] 7.1c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:46.227634+0000 osd.2 (osd.2) 167 : cluster [DBG] 7.1c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:19.110395+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:48.153238+0000 osd.2 (osd.2) 168 : cluster [DBG] 7.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:48.163699+0000 osd.2 (osd.2) 169 : cluster [DBG] 7.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 169)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:48.153238+0000 osd.2 (osd.2) 168 : cluster [DBG] 7.11 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:48.163699+0000 osd.2 (osd.2) 169 : cluster [DBG] 7.11 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898998 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1089536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:20.110611+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:49.156240+0000 osd.2 (osd.2) 170 : cluster [DBG] 6.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:49.166857+0000 osd.2 (osd.2) 171 : cluster [DBG] 6.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 171)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:49.156240+0000 osd.2 (osd.2) 170 : cluster [DBG] 6.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:49.166857+0000 osd.2 (osd.2) 171 : cluster [DBG] 6.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1097728 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:21.110799+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1089536 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:22.110960+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:51.180536+0000 osd.2 (osd.2) 172 : cluster [DBG] 6.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:51.201748+0000 osd.2 (osd.2) 173 : cluster [DBG] 6.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 173)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:51.180536+0000 osd.2 (osd.2) 172 : cluster [DBG] 6.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:51.201748+0000 osd.2 (osd.2) 173 : cluster [DBG] 6.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:23.111150+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.793622971s of 11.888650894s, submitted: 12
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1081344 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:24.111329+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:53.185012+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:53.223919+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 175)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:53.185012+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.e scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:53.223919+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.e scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906231 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1064960 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:25.111894+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:54.218575+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:54.253909+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 177)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:54.218575+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.8 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:54.253909+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.8 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:26.112086+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1048576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:27.112292+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1040384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:28.112420+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:29.112587+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:58.287959+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.17 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:56:58.312696+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.17 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908644 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 179)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:58.287959+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.17 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:56:58.312696+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.17 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:30.112747+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1015808 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:31.113144+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:32.113272+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1007616 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:33.113529+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:34.113689+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908644 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 999424 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:35.113826+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:36.113933+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.192355156s of 13.201869965s, submitted: 6
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 983040 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:37.114084+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:06.386950+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:06.425831+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 181)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:06.386950+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.f scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:06.425831+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.f scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:38.115071+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:07.435140+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:07.463416+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 183)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:07.435140+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.c scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:07.463416+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.c scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 966656 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:39.115268+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915877 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 950272 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:40.115433+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:09.426519+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:09.461774+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 185)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:09.426519+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.7 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:09.461774+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.7 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 942080 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:41.115728+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:42.115892+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:11.488014+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:11.519699+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 187)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:11.488014+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.6 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:11.519699+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.6 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:43.116120+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:12.518833+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:12.561243+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 189)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:12.518833+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:12.561243+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 933888 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:44.116360+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920701 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:45.116535+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 925696 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:46.116696+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:47.116835+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 917504 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:48.116962+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:49.117127+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920701 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 909312 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:50.117247+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.002473831s of 14.233125687s, submitted: 10
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 884736 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:51.117405+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:20.620033+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:20.651792+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 191)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:20.620033+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.18 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:20.651792+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.18 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:52.117704+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:21.657962+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  will send 2026-02-01T14:57:21.689690+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client handle_log_ack log(last 193)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:21.657962+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.13 scrub starts
Feb 01 15:23:51 compute-0 ceph-osd[88066]: log_client  logged 2026-02-01T14:57:21.689690+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.13 scrub ok
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:53.118328+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 860160 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:54.118687+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 851968 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:55.118865+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 835584 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:56.119088+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:57.119605+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 827392 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:58.119968+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:59.120112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:00.120238+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 819200 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:01.120389+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:02.120528+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:03.120723+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:04.120876+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:05.120994+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 811008 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:06.121127+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:07.121328+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 802816 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:08.121432+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:09.121550+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:10.121659+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 794624 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:11.121773+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:12.121963+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 786432 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:13.122139+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:14.122281+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:15.122471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 778240 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:16.122601+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:17.122767+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:18.122943+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 770048 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:19.123108+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:20.123249+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:21.123373+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:22.123505+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 761856 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:23.123661+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:24.123819+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 753664 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:25.123939+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:26.124064+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:27.124219+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 745472 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:28.124351+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:29.124482+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 737280 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:30.124612+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:31.124759+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:32.124917+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 729088 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:33.125095+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 720896 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:34.125246+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 720896 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:35.125358+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:36.125514+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:37.125642+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 712704 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:38.125898+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:39.126045+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:40.126241+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 704512 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:41.126393+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 696320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:42.126628+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 696320 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:43.126809+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:44.126965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:45.127103+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 688128 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:46.127204+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:47.127348+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 679936 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:48.127478+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:49.127665+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:50.127818+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 671744 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:51.127965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:52.128087+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:53.128268+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 663552 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:54.128411+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 655360 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:55.128545+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 647168 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:56.128922+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:57.129236+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 638976 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:58.129351+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:59.129475+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 630784 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:00.129629+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:01.129808+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:02.129993+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 622592 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:03.130149+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:04.130281+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:05.130471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 614400 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:06.130612+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:07.130705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:08.130856+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 606208 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:09.130989+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:10.131140+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:11.131336+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 598016 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:12.131463+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 589824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:13.131620+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 589824 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:14.131747+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:15.131884+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:16.132017+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:17.132173+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:18.132284+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 573440 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:19.132434+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:20.132555+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:21.132709+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 565248 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:22.132824+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:23.132968+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 557056 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:24.133109+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 548864 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:25.133255+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:26.133426+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 540672 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:27.133526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:28.133612+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 532480 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:29.133766+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:30.133901+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:31.134000+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 524288 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:32.134137+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:33.134345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 516096 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:34.134486+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:35.134615+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:36.134762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 507904 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:37.134986+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:38.135151+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:39.135346+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 499712 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:40.135490+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 491520 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:41.135640+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 483328 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:42.135798+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:43.136020+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 475136 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:44.136222+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:45.136378+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:46.136540+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 466944 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:47.136674+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:48.136829+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 458752 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:49.136974+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:50.137110+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 450560 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:51.137284+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 442368 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:52.137496+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:53.137742+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:54.137862+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 434176 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:55.137994+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:56.138142+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 425984 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:57.138332+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:58.138471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:59.139022+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 417792 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:00.139140+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 409600 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:01.139279+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 401408 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:02.139413+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:03.139556+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:04.139697+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 393216 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:05.139822+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:06.139974+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:07.140106+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 385024 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:08.140253+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:09.140452+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:10.140702+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 376832 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:11.140853+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 368640 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:12.140986+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:13.141464+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:14.141613+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 360448 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:15.141798+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:16.141979+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:17.142106+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 352256 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:18.142268+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:19.142416+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:20.142539+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 344064 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:21.142698+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:22.142806+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 335872 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:23.142972+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:24.143100+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:25.143261+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 327680 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:26.143425+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:27.143526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 311296 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:28.143636+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:29.143761+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:30.143880+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 303104 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:31.143989+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:32.144114+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:33.144356+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 294912 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:34.144516+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:35.144654+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 286720 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:36.144773+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:37.144930+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 278528 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:38.145040+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:39.145175+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:40.145362+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:41.145643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:42.145819+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:43.146073+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:44.146220+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:45.146392+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:46.146536+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 270336 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:47.146746+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 262144 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:48.146898+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:49.147066+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:50.147218+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 237568 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:51.147349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:52.148205+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 229376 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:53.148498+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:54.148797+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:55.148914+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 221184 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:56.149064+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:57.149283+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 212992 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:58.149452+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:59.157357+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:00.157540+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:01.157668+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 204800 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:02.157817+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:03.157969+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 196608 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:04.158081+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:05.158225+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:06.158347+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 188416 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:07.158439+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:08.158585+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 180224 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:09.158689+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:10.158877+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:11.159002+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 172032 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:12.159145+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:13.159360+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 163840 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:14.159546+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:15.159651+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:16.159790+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 155648 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:17.159943+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:18.727045+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:19.727171+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 147456 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:20.727279+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:21.727412+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:22.727580+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:23.727793+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:24.727976+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:25.728117+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:26.728268+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 139264 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:27.728407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:28.728552+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 131072 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:29.728711+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 122880 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:30.728840+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:31.728958+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 106496 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:32.729153+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:33.729366+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:34.729519+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 98304 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:35.729664+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:36.729818+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 90112 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:37.729984+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 81920 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5503 writes, 23K keys, 5503 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s
                                           Interval WAL: 5503 writes, 810 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:38.730132+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:39.730257+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 24576 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:40.730428+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:41.730548+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:42.730703+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16384 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:43.730923+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:44.731068+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 8192 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:45.731206+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:46.731338+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 0 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:47.731595+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:48.731737+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:49.731855+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1040384 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:50.731951+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:51.732115+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1032192 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:52.732217+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:53.732437+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:54.732577+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1024000 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:55.732705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:56.732846+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1015808 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:57.733001+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:58.733112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:59.733226+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:00.733354+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:01.733464+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:02.733573+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:03.733705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:04.733841+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:05.733942+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:06.734035+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:07.734162+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:08.734341+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:09.734464+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:10.734580+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:11.734691+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:12.734866+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:13.735021+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:14.735147+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:15.735282+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:16.735423+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:17.735535+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:18.735669+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:19.735789+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:20.735905+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:21.736017+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 933888 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:22.736128+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:23.736348+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:24.736502+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 966656 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:25.736656+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:26.736811+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 276.334167480s of 276.342437744s, submitted: 4
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 925696 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:27.736980+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 827392 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:28.737125+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:29.737276+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:30.737410+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:31.737552+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:32.737683+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:33.737846+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 1581056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:34.737981+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 1572864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:35.738126+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 1572864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:36.738258+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:37.738434+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:38.738550+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 1564672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:39.738661+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 1556480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:40.738798+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 1556480 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:41.738938+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 1548288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:42.739081+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 1548288 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:43.739259+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1540096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:44.739502+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1540096 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:45.739708+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:46.739834+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:47.739969+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 1531904 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:48.740124+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 1523712 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:49.748289+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 1515520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:50.748456+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 1515520 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:51.748701+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1507328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:52.748818+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1507328 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:53.748961+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:54.749157+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:55.749761+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1499136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:56.749942+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1490944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:57.750127+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1490944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:58.750374+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1482752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:59.750594+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1466368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:00.750786+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:01.750927+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:02.751069+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:03.751332+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:04.751508+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:05.751693+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:06.751911+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:07.752104+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:08.752257+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:09.752382+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:10.752498+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:11.752622+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:12.752731+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:13.752861+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:14.752964+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:15.753064+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:16.753166+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:17.753260+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:18.753345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:19.753481+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:20.753612+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:21.753762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:22.753881+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:23.754069+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:24.754217+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:25.754384+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:26.754535+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:27.754690+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:28.754840+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:29.754984+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:30.755153+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:31.755276+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:32.755370+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:33.755509+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:34.755615+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:35.755777+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:36.755910+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:37.756024+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:38.756133+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:39.756236+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:40.756393+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:41.756551+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:42.756668+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:43.756808+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:44.756908+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:45.757019+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:46.757155+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:47.757278+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:48.757414+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:49.757582+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:50.757693+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:51.757814+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:52.757934+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:53.758133+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 1302528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:54.758333+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:55.758449+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:56.758599+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:57.758717+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:58.758859+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:59.758965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:00.759123+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:01.759256+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:02.759692+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:03.759857+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:04.760019+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:05.760157+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:06.760331+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:07.760490+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:08.760591+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:09.760733+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:10.760857+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:11.760974+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:12.761089+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:13.761265+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:14.761349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:15.761492+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:16.761649+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:17.761789+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:18.761909+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:19.762041+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:20.762178+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:21.762313+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:22.762433+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:23.762643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:24.762792+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:25.762954+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:26.763068+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:27.763249+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:28.763380+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:29.763492+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:30.763648+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:31.763768+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:32.763866+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:33.764007+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:34.764126+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:35.764316+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:36.764961+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:37.765134+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:38.765285+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:39.765422+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:40.765539+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:41.765652+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:42.765762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:43.765951+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:44.766123+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:45.766375+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:46.766572+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:47.766772+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:48.766920+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:49.767072+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:50.767215+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:51.767334+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:52.767463+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:53.767632+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:54.767743+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:55.767869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:56.767981+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:57.768112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:58.768237+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:59.768349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:00.768494+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:01.768643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:02.768768+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:03.768913+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:04.769087+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:05.769259+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:06.769414+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:07.769524+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:08.769651+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:09.769791+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:10.769911+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:11.770020+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:12.770133+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:13.770286+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:14.770472+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:15.770600+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:16.770723+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:17.770891+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:18.771020+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:19.771153+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:20.771263+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:21.771365+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:22.771473+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:23.771628+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:24.771741+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:25.771842+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:26.771973+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:27.772079+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:28.772180+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:29.772328+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:30.772451+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:31.772607+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:32.772818+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:33.772965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:34.773083+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:35.773175+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:36.773289+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:37.773420+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:38.773526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:39.773628+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:40.773738+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:41.774011+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:42.774112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:43.774883+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:44.775010+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:45.775131+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:46.775270+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:47.775393+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:48.775527+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:49.775640+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:50.775780+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:51.775917+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:52.776022+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:53.776145+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:54.776258+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:55.776381+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:56.776487+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:57.776606+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:58.776756+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:59.776879+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:00.776984+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:01.777135+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:02.777288+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:03.777480+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:04.777589+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:05.777705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:06.777879+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:07.778037+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:08.778144+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:09.778243+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:10.778349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:11.778445+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:12.778581+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:13.778785+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:14.778922+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:15.779037+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:16.779134+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:17.779261+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:18.779371+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:19.779579+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:20.779693+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:21.779897+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:22.780105+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:23.780335+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:24.780542+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:25.780647+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:26.780768+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:27.780869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:28.780986+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:29.781088+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:30.781186+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:31.781397+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:32.781512+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:33.781743+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:34.781876+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:35.782091+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:36.782266+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:37.782499+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:38.782650+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:39.782798+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:40.783005+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:41.783136+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:42.783398+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:43.783619+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:44.783744+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc ms_handle_reset ms_handle_reset con 0x560d7ff3a000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: get_auth_request con 0x560d82092400 auth_method 0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:45.783847+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:46.783978+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:47.784101+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:48.784346+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:49.784465+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:50.784565+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:51.784671+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:52.784813+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:53.785026+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:54.785202+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:55.785363+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:56.785545+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:57.785705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:58.785817+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:59.785969+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:00.786091+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:01.786266+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:02.786407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:03.786599+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 1236992 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:04.786749+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:05.786879+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:06.787060+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:07.787214+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:08.787408+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:09.787585+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:10.787768+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:11.787966+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:12.788184+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:13.788410+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:14.788638+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:15.788828+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:16.788989+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:17.789117+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:18.789277+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:19.789459+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:20.789588+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:21.789741+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:22.789926+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:23.790125+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:24.790267+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:25.790395+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:26.790568+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925527 data_alloc: 218103808 data_used: 5159
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.010803223s of 300.151306152s, submitted: 90
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d7f8a8c00
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:27.790712+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:28.790880+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:29.791007+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:30.791129+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:31.791338+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:32.791500+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:33.791659+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:34.791798+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:35.791953+0000)
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.14546 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1874586709' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1002663905' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='client.? 192.168.122.10:0/3867469104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:36.792115+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:37.792270+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:38.792488+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:39.792687+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:40.792841+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:41.792969+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:42.793197+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:43.793446+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:44.793649+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:45.793870+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:46.794096+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:47.794504+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:48.794829+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:49.795042+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:50.795210+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:51.795380+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:52.795544+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:53.795790+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:54.796050+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:55.796380+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:56.796626+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:57.796858+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:58.797056+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:59.797364+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:00.797558+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:01.797859+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:02.798048+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:03.798280+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:04.798521+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:05.798720+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:06.798930+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:07.799163+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:08.799349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:09.799519+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:10.799654+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:11.799797+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:12.799951+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:13.800152+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:14.800381+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:15.800563+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:16.800758+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:17.800931+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:18.801122+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:19.801395+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:20.801575+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:21.801721+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:22.801845+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:23.802006+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:24.802152+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:25.802276+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:26.802516+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:27.802706+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:28.802876+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:29.803079+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:30.803255+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:31.803421+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:32.803577+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:33.803777+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:34.803973+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:35.804193+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:36.804352+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:37.804534+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:38.804661+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:39.804850+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:40.805065+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:41.805230+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:42.805370+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:43.805567+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:44.805708+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:45.805846+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:46.805967+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:47.806119+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:48.806259+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 1130496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:49.806391+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:50.806559+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:51.806673+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:52.806961+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:53.807774+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:54.807906+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:55.808062+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:56.808213+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:57.808376+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:58.808507+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:59.808648+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:00.808775+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:01.808908+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:02.809062+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:03.809243+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:04.809378+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:05.809543+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:06.809715+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:07.809852+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:08.809999+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:09.810160+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:10.810374+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:11.810513+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:12.810621+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:13.810780+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:14.810981+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:15.811151+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:16.811270+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:17.811396+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:18.811546+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:19.811735+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:20.811932+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:21.812097+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:22.812277+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:23.812562+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:24.812842+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:25.812979+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:26.813108+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:27.813228+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:28.813351+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:29.813537+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:30.813684+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:31.813914+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:32.814094+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:33.814271+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:34.814430+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:35.814608+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:36.814805+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:37.814970+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:38.815093+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:39.815218+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:40.815350+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:41.815467+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:42.815610+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:43.815877+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:44.818144+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:45.818407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:46.818559+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:47.818690+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:48.818840+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:49.818966+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:50.819060+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:51.819177+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:52.819360+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:53.819503+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:54.819607+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:55.819692+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:56.819758+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:57.819899+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:58.820055+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:59.820165+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:00.820257+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:01.820410+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:02.820657+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:03.820857+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:04.820959+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:05.821077+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:06.821225+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:07.821405+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:08.821524+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:09.821638+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:10.821747+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:11.821907+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:12.822081+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:13.822362+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:14.822514+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:15.822643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:16.822791+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:17.822942+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:18.823036+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:19.823170+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:20.823281+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:21.823373+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:22.823482+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:23.823629+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:24.823729+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:25.823878+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:26.823968+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:27.824076+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:28.824179+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:29.824269+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:30.824378+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:31.824522+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:32.824684+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:33.824872+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:34.825027+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:35.825189+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:36.825383+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:37.825519+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:38.825664+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:39.825774+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:40.825918+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:41.826027+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:42.826150+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:43.826438+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:44.826594+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:45.826801+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:46.827043+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:47.827169+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:48.827283+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:49.827404+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:50.828481+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:51.830709+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:52.831993+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:53.832649+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:54.833800+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:55.834164+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:56.834741+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:57.834978+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:58.835832+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:59.836441+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:00.836926+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:01.837276+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:02.837521+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:03.837762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:04.838249+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:05.838533+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:06.838657+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:07.839046+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:08.839411+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:09.839951+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:10.840217+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:11.840493+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:12.840723+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:13.841044+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:14.841279+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:15.841471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:16.841644+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:17.841869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:18.842041+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:19.842216+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:20.842482+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:21.842666+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:22.842984+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:23.843237+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:24.843425+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:25.843587+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:26.843747+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:27.844005+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:28.844221+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:29.844456+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:30.844604+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:31.844755+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:32.844918+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:33.845070+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:34.845218+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:35.845422+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:36.845632+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:37.845811+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5731 writes, 24K keys, 5731 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5731 writes, 924 syncs, 6.20 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70fa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560d7e70f8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:38.845988+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:39.846130+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:40.846256+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:41.846396+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:42.846591+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:43.846795+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:44.846954+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:45.847105+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:46.847395+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:47.847614+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:48.847779+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:49.847923+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:50.848076+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:51.848228+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:52.848436+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:53.848597+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:54.848773+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:55.848919+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:56.849092+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:57.849252+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:58.849402+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:59.849595+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:00.849787+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:01.849942+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:02.850218+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:03.850417+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:04.850599+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:05.850818+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:06.851028+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:07.851343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:08.851626+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:09.851877+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:10.852013+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:11.852205+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:12.852515+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:13.852715+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:14.852863+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:15.853031+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:16.853168+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:17.853278+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:18.853430+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:19.853605+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:20.853780+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:21.853940+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:22.854122+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:23.854360+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:24.854506+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:25.854670+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 925696 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:26.854851+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.904083252s of 299.935333252s, submitted: 24
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:27.855042+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:28.855233+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:29.855423+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:30.855534+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:31.856499+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:32.857435+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:33.858029+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:34.858687+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:35.859869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:36.860439+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:37.860795+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:38.861196+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:39.861563+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:40.861815+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:41.862443+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:42.862778+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:43.863041+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:44.863269+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:45.863410+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:46.863554+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:47.863862+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:48.864064+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:49.864364+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 417792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:50.864526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:51.864703+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:52.864894+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:53.865103+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:54.865375+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:55.865547+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:56.865722+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:57.865891+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:58.866021+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:59.866145+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:00.866260+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:01.866404+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:02.866561+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:03.866797+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:04.866968+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:05.867102+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:06.867273+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:07.867408+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927063 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:08.867669+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:09.867793+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:10.867921+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d80adf400
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7e64/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:11.868037+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 45.135498047s of 45.419361115s, submitted: 90
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 221184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:12.868477+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980548 data_alloc: 218103808 data_used: 6997
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 123 ms_handle_reset con 0x560d80adf400 session 0x560d7ff9e540
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 16801792 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:13.868737+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d82092800
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 24027136 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:14.868878+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbeb0000/0x0/0x4ffc00000, data 0x10bd1eb/0x117c000, compress 0x0/0x0/0x0, omap 0x10eb5, meta 0x2bbf14b), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeb0000/0x0/0x4ffc00000, data 0x10bd1eb/0x117c000, compress 0x0/0x0/0x0, omap 0x10eb5, meta 0x2bbf14b), peers [0,1] op hist [1])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 124 ms_handle_reset con 0x560d82092800 session 0x560d82910380
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:15.869036+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:16.869287+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:17.869536+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1032047 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:18.869723+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbeaa000/0x0/0x4ffc00000, data 0x10bedc6/0x1180000, compress 0x0/0x0/0x0, omap 0x11261, meta 0x2bbed9f), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:19.869896+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:20.870121+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:21.870325+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:22.870485+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 24002560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:23.870704+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:24.871004+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:25.871153+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:26.871368+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:27.871481+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:28.871595+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:29.871742+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:30.871909+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:31.872086+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:32.872291+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034533 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:33.872486+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:34.872630+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:35.872791+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 23994368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:36.872966+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea7000/0x0/0x4ffc00000, data 0x10c0845/0x1183000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d82a70000
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.540817261s of 24.730327606s, submitted: 57
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 23855104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:37.873118+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c08e0/0x1184000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037197 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 23805952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:38.873251+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 10
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:39.873447+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:40.873576+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:41.873706+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:42.873849+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038745 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:43.873979+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:44.874147+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:45.874339+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 23740416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:46.874518+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 11
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:47.874692+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10c0a16/0x1186000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d820b7c00
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.054696083s of 11.060445786s, submitted: 3
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036335 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 23519232 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:48.874822+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:49.874996+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:50.875176+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:51.875349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:52.875529+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035361 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:53.876167+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbea8000/0x0/0x4ffc00000, data 0x10c08e0/0x1184000, compress 0x0/0x0/0x0, omap 0x11539, meta 0x2bbeac7), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 23511040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:54.876345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:55.876471+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:56.876624+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:57.876832+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbea3000/0x0/0x4ffc00000, data 0x10c24e5/0x1187000, compress 0x0/0x0/0x0, omap 0x117c4, meta 0x2bbe83c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.055249214s of 10.100062370s, submitted: 26
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1040547 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:58.877041+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:59.877222+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:00.877344+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:01.877507+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbea2000/0x0/0x4ffc00000, data 0x10c2580/0x1188000, compress 0x0/0x0/0x0, omap 0x117c4, meta 0x2bbe83c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:02.877638+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043177 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:03.877808+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:04.877903+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:05.878060+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:06.878202+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:07.878371+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042587 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 23502848 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.503337860s of 10.511715889s, submitted: 14
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:08.878438+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea0000/0x0/0x4ffc00000, data 0x10c3f64/0x118a000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 23494656 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:09.878604+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 23494656 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:10.878788+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:11.878895+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:12.879042+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea1000/0x0/0x4ffc00000, data 0x10c3fff/0x118b000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043559 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:13.879222+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbea1000/0x0/0x4ffc00000, data 0x10c3fff/0x118b000, compress 0x0/0x0/0x0, omap 0x11a9d, meta 0x2bbe563), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 23486464 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:14.879384+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:15.879534+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:16.879663+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:17.879800+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042825 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 23478272 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:18.879929+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.553779602s of 10.561478615s, submitted: 3
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:19.880073+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbe9d000/0x0/0x4ffc00000, data 0x10c5b69/0x118d000, compress 0x0/0x0/0x0, omap 0x11d28, meta 0x2bbe2d8), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:20.880178+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:21.880360+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fbe9d000/0x0/0x4ffc00000, data 0x10c5b69/0x118d000, compress 0x0/0x0/0x0, omap 0x11d28, meta 0x2bbe2d8), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:22.880509+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046319 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:23.880745+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:24.880900+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:25.881072+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 23461888 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:26.881225+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe9b000/0x0/0x4ffc00000, data 0x10c754d/0x118f000, compress 0x0/0x0/0x0, omap 0x12001, meta 0x2bbdfff), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:27.881400+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051741 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:28.881544+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 23445504 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:29.881672+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:30.881805+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:31.881965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:32.882115+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe96000/0x0/0x4ffc00000, data 0x10c9182/0x1192000, compress 0x0/0x0/0x0, omap 0x1228c, meta 0x2bbdd74), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051741 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:33.882375+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 23437312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.750567436s of 14.883956909s, submitted: 62
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:34.882599+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 23429120 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:35.882749+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 23429120 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:36.882881+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 23420928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0x10cadf2/0x1198000, compress 0x0/0x0/0x0, omap 0x125e4, meta 0x2bbda1c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:37.883063+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 23412736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059973 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:38.883237+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 23412736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe90000/0x0/0x4ffc00000, data 0x10cae20/0x1198000, compress 0x0/0x0/0x0, omap 0x125e4, meta 0x2bbda1c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:39.883431+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:40.883639+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:41.883816+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe94000/0x0/0x4ffc00000, data 0x10cc826/0x1198000, compress 0x0/0x0/0x0, omap 0x1286f, meta 0x2bbd791), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:42.884005+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 23658496 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061405 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:43.884223+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 22609920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:44.884379+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 22609920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 132 handle_osd_map epochs [133,134], i have 132, src has [1,134]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.087007523s of 11.162994385s, submitted: 50
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:45.884566+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 21553152 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:46.884740+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 21536768 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:47.884988+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10d1c51/0x11a3000, compress 0x0/0x0/0x0, omap 0x12dd8, meta 0x2bbd228), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070609 data_alloc: 218103808 data_used: 8195
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:48.885158+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:49.885401+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 21504000 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:50.885580+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:51.885711+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe80000/0x0/0x4ffc00000, data 0x10d53cc/0x11aa000, compress 0x0/0x0/0x0, omap 0x1333c, meta 0x2bbccc4), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:52.885823+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076731 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:53.885994+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:54.886176+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:55.886373+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:56.886552+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:57.886726+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 21479424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 137 handle_osd_map epochs [138,139], i have 137, src has [1,139]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.667154312s of 12.851060867s, submitted: 105
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080763 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe81000/0x0/0x4ffc00000, data 0x10d51fb/0x11a7000, compress 0x0/0x0/0x0, omap 0x1333c, meta 0x2bbccc4), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:58.886916+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 21405696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:59.887055+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:00.887253+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe7d000/0x0/0x4ffc00000, data 0x10d891b/0x11ad000, compress 0x0/0x0/0x0, omap 0x13699, meta 0x2bbc967), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:01.887405+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 21372928 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:02.887584+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083105 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:03.887974+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:04.888146+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10da3e6/0x11b0000, compress 0x0/0x0/0x0, omap 0x13a2c, meta 0x2bbc5d4), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:05.888375+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:06.888570+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:07.888740+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085879 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:08.888882+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:09.889010+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:10.889158+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fbe77000/0x0/0x4ffc00000, data 0x10dbe81/0x11b3000, compress 0x0/0x0/0x0, omap 0x13d2f, meta 0x2bbc2d1), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:11.889331+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:12.889489+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085879 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:13.889630+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:14.889779+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:15.889976+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.290771484s of 18.353887558s, submitted: 63
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:16.890124+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe77000/0x0/0x4ffc00000, data 0x10dbe81/0x11b3000, compress 0x0/0x0/0x0, omap 0x13d2f, meta 0x2bbc2d1), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:17.890279+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088653 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe74000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:19.799588+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:20.799704+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:21.799799+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 20283392 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:22.799939+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 20250624 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090345 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:23.800035+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 20234240 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:24.800227+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe73000/0x0/0x4ffc00000, data 0x10dd99b/0x11b7000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:25.800427+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:26.800563+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.632844925s of 10.640859604s, submitted: 13
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:27.800763+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087933 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:28.800959+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:29.801122+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0x10dd900/0x11b6000, compress 0x0/0x0/0x0, omap 0x14042, meta 0x2bbbfbe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:30.801276+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:31.801505+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:32.801643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091427 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:33.801773+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:34.801918+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbe71000/0x0/0x4ffc00000, data 0x10df505/0x11b9000, compress 0x0/0x0/0x0, omap 0x142cd, meta 0x2bbbd33), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:35.802084+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:36.802226+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:37.802354+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094201 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:38.802487+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:39.802635+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6e000/0x0/0x4ffc00000, data 0x10e0f84/0x11bc000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:40.802771+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:41.802936+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6e000/0x0/0x4ffc00000, data 0x10e0f84/0x11bc000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:42.803652+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.540208817s of 15.619994164s, submitted: 64
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095893 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:43.803800+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 20226048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:44.804072+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:45.804206+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e101f/0x11bd000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:46.804858+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 20217856 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:47.805045+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79339520 unmapped: 20201472 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:48.805442+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098413 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:49.805585+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e10ba/0x11be000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:50.806260+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e10ba/0x11be000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:51.806461+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:52.806702+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.385616302s of 10.393396378s, submitted: 4
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:53.806873+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096977 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:54.807044+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0x10e101f/0x11bd000, compress 0x0/0x0/0x0, omap 0x145e0, meta 0x2bbba20), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:55.807167+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:56.807375+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:57.807541+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:58.807674+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097933 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:59.807804+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:00.807925+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:01.808064+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:02.808274+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:03.808486+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097933 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:04.808644+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:05.808786+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.764005661s of 12.822526932s, submitted: 26
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe6b000/0x0/0x4ffc00000, data 0x10e2b89/0x11bf000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:06.808948+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:07.809133+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:08.809366+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100707 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:09.809485+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:10.809681+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:11.809917+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:12.810147+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:13.810270+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100707 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe68000/0x0/0x4ffc00000, data 0x10e4608/0x11c2000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:14.810466+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 20193280 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:15.810622+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.986886024s of 10.001235008s, submitted: 13
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 20160512 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:16.810772+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 20160512 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:17.810886+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:18.811045+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103371 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:19.811183+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe67000/0x0/0x4ffc00000, data 0x10e47d9/0x11c5000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79396864 unmapped: 20144128 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:20.811392+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:21.811565+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:22.811731+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e484e/0x11c6000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:23.811862+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108015 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 79429632 unmapped: 20111360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:24.812029+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e48c2/0x11c7000, compress 0x0/0x0/0x0, omap 0x14b7e, meta 0x2bbb482), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 19021824 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:25.812184+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d7ff3b400
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.170200348s of 10.202037811s, submitted: 10
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 18857984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:26.812323+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 18825216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 12
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:27.812468+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 18882560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:28.812613+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110241 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 18874368 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e49bd/0x11c7000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 13
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:29.812771+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e49bd/0x11c7000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 18759680 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:30.812986+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe64000/0x0/0x4ffc00000, data 0x10e4837/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:31.813153+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:32.813380+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:33.813503+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110929 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:34.813672+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e4835/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:35.813809+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.980128288s of 10.007835388s, submitted: 15
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:36.813937+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 18751488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:37.814101+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 18759680 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe66000/0x0/0x4ffc00000, data 0x10e4809/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:38.814280+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110753 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:39.814440+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:40.814594+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:41.814763+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:42.814899+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0x10e4809/0x11c6000, compress 0x0/0x0/0x0, omap 0x14ce9, meta 0x2bbb317), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:43.815046+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111583 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:44.815246+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:45.815370+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e6373/0x11c8000, compress 0x0/0x0/0x0, omap 0x15044, meta 0x2bbafbc), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:46.815483+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 18735104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:47.815611+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957118034s of 12.004303932s, submitted: 30
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 18726912 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:48.815754+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113417 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:49.815909+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e63a1/0x11c8000, compress 0x0/0x0/0x0, omap 0x15044, meta 0x2bbafbc), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:50.816050+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 18710528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:51.816188+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:52.816317+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:53.816440+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116895 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:54.816583+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:55.816717+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:56.816875+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:57.817054+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981225967s of 10.005500793s, submitted: 19
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:58.817243+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115441 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:59.817370+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:00.817522+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:01.817726+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d57/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:02.817899+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:03.818118+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115441 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:04.818381+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:05.818555+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:06.818744+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe64000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 18685952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:07.818950+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:08.819241+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115969 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:09.819369+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:10.819565+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7d55/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 18653184 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.155082703s of 13.166279793s, submitted: 6
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:11.819828+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:12.820013+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d53/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:13.820157+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 18636800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:14.820337+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:15.820613+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:16.820809+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:17.820979+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:18.821223+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:19.821337+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:20.821452+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:21.821565+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:22.821663+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:23.821802+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 18628608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:24.823651+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:25.823810+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:26.823994+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7c8c/0x11c8000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:27.824125+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000039s
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:28.824245+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115235 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.768423080s of 17.776714325s, submitted: 3
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:29.824520+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:30.824705+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:31.824819+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:32.824969+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:33.825115+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:34.825260+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:35.825439+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe62000/0x0/0x4ffc00000, data 0x10e7dc2/0x11ca000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:36.825618+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:37.825862+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:38.826055+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115953 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:39.826247+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 18620416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x10e7d27/0x11c9000, compress 0x0/0x0/0x0, omap 0x1533d, meta 0x2bbacc3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.382137299s of 11.391574860s, submitted: 3
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:40.826411+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 18604032 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:41.826631+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 18513920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:42.826803+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 18513920 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:43.826949+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122525 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 17383424 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fbe41000/0x0/0x4ffc00000, data 0x1108ae3/0x11eb000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x2bbaa38), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:44.827188+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 17113088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:45.827348+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 16924672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:46.827519+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 16924672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fbde2000/0x0/0x4ffc00000, data 0x1167e51/0x124a000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x2bbaa38), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:47.827769+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 ms_handle_reset con 0x560d7ff3b400 session 0x560d81c43c00
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 13844480 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:48.828003+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132665 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 14
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 13787136 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:49.828132+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fabff000/0x0/0x4ffc00000, data 0x11a8eee/0x128c000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x3d5aa38), peers [0,1] op hist [0,1])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 12722176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:50.828214+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 12247040 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fabc8000/0x0/0x4ffc00000, data 0x11e02bc/0x12c3000, compress 0x0/0x0/0x0, omap 0x155c8, meta 0x3d5aa38), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.793652534s of 11.003534317s, submitted: 271
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:51.828343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 12959744 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:52.828478+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88162304 unmapped: 11378688 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:53.828637+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149699 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 10993664 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:54.828787+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 10821632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:55.829007+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 10821632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:56.829174+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fab3a000/0x0/0x4ffc00000, data 0x126dcdd/0x1352000, compress 0x0/0x0/0x0, omap 0x158d9, meta 0x3d5a727), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 10747904 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:57.829374+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10485760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:58.829518+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148625 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 10256384 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:59.829665+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d7f8a9800
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 9592832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:00.829787+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa8c000/0x0/0x4ffc00000, data 0x131b204/0x1400000, compress 0x0/0x0/0x0, omap 0x158d9, meta 0x3d5a727), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 15
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 90005504 unmapped: 9535488 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:01.829930+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.966275215s of 10.171354294s, submitted: 105
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91488256 unmapped: 8052736 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:02.830133+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa56000/0x0/0x4ffc00000, data 0x134ecc8/0x1434000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 8290304 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:03.830417+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154013 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 8290304 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:04.830696+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 8429568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:05.830929+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91381760 unmapped: 8159232 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:06.831150+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4faa17000/0x0/0x4ffc00000, data 0x138f9e5/0x1475000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:07.831374+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa9f7000/0x0/0x4ffc00000, data 0x13afea0/0x1495000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:08.831584+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161625 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 8454144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:09.831761+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91013120 unmapped: 8527872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:10.831895+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92217344 unmapped: 7323648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:11.832045+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.723700523s of 10.847046852s, submitted: 63
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 7258112 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:12.832204+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 7258112 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:13.832428+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa972000/0x0/0x4ffc00000, data 0x143454d/0x151a000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165339 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92348416 unmapped: 7192576 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:14.832631+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa962000/0x0/0x4ffc00000, data 0x1445242/0x152a000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 6979584 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:15.832775+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 6914048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:16.832881+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 6914048 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:17.833037+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8011776 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:18.833176+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172933 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8044544 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:19.833357+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 6963200 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:20.833513+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0x14eb5a7/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:21.833687+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:22.833850+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.517700195s of 10.702951431s, submitted: 59
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:23.834089+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173367 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:24.834253+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8b9000/0x0/0x4ffc00000, data 0x14eb5aa/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:25.834349+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 6799360 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:26.834505+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:27.834647+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fa8ba000/0x0/0x4ffc00000, data 0x14eb5a8/0x15d2000, compress 0x0/0x0/0x0, omap 0x15a25, meta 0x3d5a5db), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:28.834814+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173719 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:29.834973+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 6791168 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed112/0x15d4000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:30.835116+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed112/0x15d4000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:31.835338+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:32.835486+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.976147652s of 10.041531563s, submitted: 38
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:33.835627+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 6782976 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa8b6000/0x0/0x4ffc00000, data 0x14ed078/0x15d3000, compress 0x0/0x0/0x0, omap 0x15cb0, meta 0x3d5a350), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181937 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:34.835843+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 6766592 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:35.836009+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92774400 unmapped: 6766592 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:36.836212+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92782592 unmapped: 6758400 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:37.836395+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:38.836580+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 6750208 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185829 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:39.836726+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8af000/0x0/0x4ffc00000, data 0x14f06ff/0x15d9000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:40.836980+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:41.837130+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:42.837345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:43.837513+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183465 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:44.837680+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6725632 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.959350586s of 12.027016640s, submitted: 47
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:45.837811+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 6684672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa8b5000/0x0/0x4ffc00000, data 0x14f05c9/0x15d7000, compress 0x0/0x0/0x0, omap 0x162ef, meta 0x3d59d11), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:46.837972+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:47.838157+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:48.838406+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190197 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa8ab000/0x0/0x4ffc00000, data 0x14f3cc9/0x15dd000, compress 0x0/0x0/0x0, omap 0x163de, meta 0x3d59c22), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:49.838566+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92897280 unmapped: 6643712 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:50.838740+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6635520 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:51.838885+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6635520 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:52.839022+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 6873088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:53.839212+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 6873088 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190721 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x14f3d92/0x15de000, compress 0x0/0x0/0x0, omap 0x163de, meta 0x3d59c22), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:54.839422+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 6864896 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: handle_auth_request added challenge on 0x560d82b7e400
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.852005005s of 10.095981598s, submitted: 40
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:55.839570+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92848128 unmapped: 6692864 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:56.839762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92856320 unmapped: 6684672 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 16
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:57.839960+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:58.840074+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192745 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:59.840202+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:00.840526+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:01.840718+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:02.841039+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:03.841208+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192745 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:04.841407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:05.841603+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ac000/0x0/0x4ffc00000, data 0x14f5768/0x15e0000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.775607109s of 10.854191780s, submitted: 21
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:06.841812+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 6594560 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:07.841972+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:08.842125+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194453 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:09.842315+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:10.842464+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa8ab000/0x0/0x4ffc00000, data 0x14f582f/0x15e1000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:11.842625+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:12.842814+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:13.842992+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197213 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:14.843175+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:15.843357+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:16.843542+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:17.843668+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa8a6000/0x0/0x4ffc00000, data 0x14f736d/0x15e3000, compress 0x0/0x0/0x0, omap 0x164ea, meta 0x3d59b16), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:18.843847+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197213 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:19.844010+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:20.844206+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 6561792 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.543810844s of 14.834462166s, submitted: 29
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:21.844409+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 6553600 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fa8a4000/0x0/0x4ffc00000, data 0x14f8dec/0x15e6000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:22.844587+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:23.844753+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202251 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:24.844977+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:25.845163+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:26.845343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fa9f1/0x15e9000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:27.845463+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92995584 unmapped: 6545408 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:28.845612+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202251 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:29.845802+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:30.845961+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6537216 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fa9f1/0x15e9000, compress 0x0/0x0/0x0, omap 0x1652d, meta 0x3d59ad3), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.380644798s of 10.438771248s, submitted: 36
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:31.846094+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:32.846213+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:33.846346+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 6529024 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206253 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:34.846501+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:35.846690+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:36.846898+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc5d4/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:37.847209+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:38.847367+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207081 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:39.847529+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:40.847642+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5d2/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:41.847787+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5d2/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:42.847900+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:43.848020+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205389 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:44.848182+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc50b/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:45.848379+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:46.848517+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:47.848663+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.297727585s of 16.323945999s, submitted: 20
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 6520832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:48.848840+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208645 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:49.848955+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:50.849084+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fc66f/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93028352 unmapped: 6512640 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:51.849256+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x14fc66f/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:52.849535+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:53.849727+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208629 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:54.850048+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc66d/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:55.850274+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc66d/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:56.850564+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:57.850736+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.305027008s of 10.329172134s, submitted: 10
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:58.850861+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208039 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:59.851598+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:00.852080+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc5a6/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:01.852359+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 6504448 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:02.852505+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:03.853741+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209013 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:04.854136+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 6496256 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fc5d4/0x15ee000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:05.855137+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:06.855397+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:07.855859+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.115866661s of 10.131819725s, submitted: 9
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:08.856613+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209365 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:09.856785+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc470/0x15ec000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa89e000/0x0/0x4ffc00000, data 0x14fc538/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:10.856993+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:11.857161+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:12.857380+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:13.857553+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209365 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:14.857745+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 6578176 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _renew_subs
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa89f000/0x0/0x4ffc00000, data 0x14fc536/0x15ed000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:15.857912+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:16.858166+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:17.858389+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 6569984 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196979523s of 10.256405830s, submitted: 81
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 161 ms_handle_reset con 0x560d82b7e400 session 0x560d828a8fc0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 161 ms_handle_reset con 0x560d7f8a9800 session 0x560d837ebc00
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:18.858546+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 17
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x14fe0a5/0x15ef000, compress 0x0/0x0/0x0, omap 0x165b3, meta 0x3d59a4d), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:19.858751+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212139 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:20.858906+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:21.859114+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x14ffbdf/0x15f3000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:22.859273+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:23.859475+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:24.859698+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215633 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:25.859905+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94158848 unmapped: 5382144 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:26.860096+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa897000/0x0/0x4ffc00000, data 0x14ffbdf/0x15f3000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:27.860264+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:28.860382+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.729774475s of 10.760393143s, submitted: 193
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:29.860533+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213221 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:30.860667+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89a000/0x0/0x4ffc00000, data 0x14ffb44/0x15f2000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:31.860824+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:32.861000+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:33.861148+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa89a000/0x0/0x4ffc00000, data 0x14ffb44/0x15f2000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:34.861345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213221 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:35.861506+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:36.861666+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:37.861800+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa898000/0x0/0x4ffc00000, data 0x14ffc7a/0x15f4000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:38.861978+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 5373952 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.998134613s of 10.008211136s, submitted: 6
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:39.862156+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217579 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:40.862367+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x150187f/0x15f7000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:41.862496+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:42.862608+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:43.862749+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:44.862915+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219811 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94175232 unmapped: 5365760 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x150187f/0x15f7000, compress 0x0/0x0/0x0, omap 0x166bf, meta 0x3d59941), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:45.863043+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:46.863191+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:47.863356+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa891000/0x0/0x4ffc00000, data 0x1503263/0x15f9000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94183424 unmapped: 5357568 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:48.863521+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:49.863716+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221405 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.603826523s of 10.692914009s, submitted: 38
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa892000/0x0/0x4ffc00000, data 0x15031c8/0x15f8000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:50.863858+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:51.864006+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:52.864147+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa894000/0x0/0x4ffc00000, data 0x15031c8/0x15f8000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:53.864346+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:54.864520+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222377 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 5349376 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:55.864672+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:56.864808+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:57.864978+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:58.865235+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0x1506a6d/0x15ff000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:59.865383+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228501 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94216192 unmapped: 5324800 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:00.865512+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa88b000/0x0/0x4ffc00000, data 0x1506a6d/0x15ff000, compress 0x0/0x0/0x0, omap 0x16702, meta 0x3d598fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.033651352s of 11.112089157s, submitted: 61
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:01.865685+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:02.865856+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:03.866005+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:04.866210+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230685 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:05.866334+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:06.866476+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:07.866633+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:08.866928+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:09.867086+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa889000/0x0/0x4ffc00000, data 0x150846d/0x1601000, compress 0x0/0x0/0x0, omap 0x16788, meta 0x3d59878), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230685 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94224384 unmapped: 5316608 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:10.867215+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.938399315s of 10.001417160s, submitted: 53
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:11.867371+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:12.867482+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:13.867635+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:14.867856+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236697 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:15.867998+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:16.868130+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:17.868278+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:18.868484+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa881000/0x0/0x4ffc00000, data 0x150bcaf/0x1607000, compress 0x0/0x0/0x0, omap 0x1680e, meta 0x3d597f2), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:19.868614+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236697 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:20.868778+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 169 handle_osd_map epochs [170,171], i have 169, src has [1,171]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.968458176s of 10.001696587s, submitted: 32
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:21.868905+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:22.869031+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 5292032 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:23.869174+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:24.869343+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242213 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:25.869470+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:26.869617+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:27.869748+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:28.869927+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:29.870056+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242213 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:30.870172+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87d000/0x0/0x4ffc00000, data 0x150f38f/0x160d000, compress 0x0/0x0/0x0, omap 0x16894, meta 0x3d5976c), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:31.870348+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:32.870477+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:33.870776+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 5423104 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:34.871027+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:35.871214+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:36.871398+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:37.871561+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8843 writes, 32K keys, 8843 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 8843 writes, 2113 syncs, 4.19 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3112 writes, 8422 keys, 3112 commit groups, 1.0 writes per commit group, ingest: 8.08 MB, 0.01 MB/s
                                           Interval WAL: 3112 writes, 1189 syncs, 2.62 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:38.871748+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:39.871869+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:40.871970+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:41.872138+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:42.872266+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:43.872461+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 5496832 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:44.872640+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc ms_handle_reset ms_handle_reset con 0x560d82092400
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: get_auth_request con 0x560d82a36c00 auth_method 0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:45.872825+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:46.873010+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:47.873162+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:48.873370+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:49.873488+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:50.873647+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:51.873799+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:52.873922+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:53.874181+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:54.874423+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:55.874690+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:56.874861+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:57.875026+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:58.875170+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:59.875361+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:00.875485+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:01.875559+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:02.875725+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:03.876033+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:04.876190+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 5242880 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:05.876358+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:06.876541+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:07.876775+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:08.876945+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:09.877162+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:10.877468+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:11.877654+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:12.877847+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:13.877965+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:14.878124+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244555 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:15.878366+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:16.878548+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:17.878694+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fa87a000/0x0/0x4ffc00000, data 0x1510e2e/0x1610000, compress 0x0/0x0/0x0, omap 0x169a0, meta 0x3d59660), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.867332458s of 56.905132294s, submitted: 32
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:18.878866+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:19.879084+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246247 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:20.879394+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 18
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:21.879627+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0x1512a33/0x1613000, compress 0x0/0x0/0x0, omap 0x199f1, meta 0x3d5660f), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:22.879897+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 19
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:23.880174+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:24.880417+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247329 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:25.880601+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:26.880746+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 5308416 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:27.880946+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 5275648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0x1512a33/0x1613000, compress 0x0/0x0/0x0, omap 0x199f1, meta 0x3d5660f), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:28.881164+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 5275648 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.735460281s of 10.886501312s, submitted: 58
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:29.881359+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246609 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:30.881515+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:31.881693+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:32.881825+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:33.881989+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:34.882172+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:35.882366+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:36.882503+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:37.882655+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:38.882820+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:39.882993+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:40.883185+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:41.883389+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:42.883538+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:43.883674+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:44.883823+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250103 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:45.883978+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fa874000/0x0/0x4ffc00000, data 0x15144b2/0x1616000, compress 0x0/0x0/0x0, omap 0x19d02, meta 0x3d562fe), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:46.884091+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:47.884225+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:48.884354+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:49.884491+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94142464 unmapped: 5398528 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.894744873s of 21.044736862s, submitted: 94
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252877 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:50.884642+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:51.884801+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa871000/0x0/0x4ffc00000, data 0x15160b7/0x1619000, compress 0x0/0x0/0x0, omap 0x19f8d, meta 0x3d56073), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:52.884973+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:53.885112+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa871000/0x0/0x4ffc00000, data 0x15160b7/0x1619000, compress 0x0/0x0/0x0, omap 0x19f8d, meta 0x3d56073), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:54.885249+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252877 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:55.885407+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:56.885575+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:57.885758+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:58.885943+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:59.886073+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:00.886376+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:01.886532+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:02.886756+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:03.886935+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:04.887096+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219406421' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:05.887262+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:06.887430+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:07.887551+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:08.887723+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:09.887842+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:10.888006+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:11.888230+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:12.888470+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:13.888647+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:14.888834+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:15.888988+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:16.889175+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:17.889388+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:18.889577+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:19.889716+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:20.889851+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:21.890070+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:22.890251+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:23.890431+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:24.890643+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:25.890830+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:26.890963+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:27.891155+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:28.891387+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:29.891555+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:30.891745+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:31.891999+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:32.892205+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:33.892376+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:34.892618+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:35.892807+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:36.892928+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:37.893093+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:38.893237+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:39.893423+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:40.893675+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:41.893893+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:42.894052+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:43.894199+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:44.894361+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255651 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:45.894482+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:46.894659+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:47.894911+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa86e000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:48.895116+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94150656 unmapped: 5390336 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.063335419s of 59.114078522s, submitted: 40
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 ms_handle_reset con 0x560d82a70000 session 0x560d80e7c1c0
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:49.895334+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 5021696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:50.895491+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 5021696 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Got map version 20
Feb 01 15:23:51 compute-0 ceph-osd[88066]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:51.895671+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:52.895819+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:53.895979+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:54.896183+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:55.896345+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:56.896538+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:57.896700+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:58.896846+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:59.897158+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:00.897284+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:01.897478+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:02.897620+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:03.897780+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:04.897942+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:05.898049+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:06.898825+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:07.898981+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:08.899120+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:09.899272+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:10.899447+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:11.899620+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:12.899762+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:13.899888+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:14.900018+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:15.900127+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94535680 unmapped: 5005312 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa870000/0x0/0x4ffc00000, data 0x1517b36/0x161c000, compress 0x0/0x0/0x0, omap 0x1a29e, meta 0x3d55d62), peers [0,1] op hist [])
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:16.900289+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 4956160 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'config diff' '{prefix=config diff}'
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'config show' '{prefix=config show}'
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'counter dump' '{prefix=counter dump}'
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'counter schema' '{prefix=counter schema}'
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:17.900403+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 4431872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:18.900870+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 4431872 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:19.901016+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 4407296 heap: 99540992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:51 compute-0 ceph-osd[88066]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:51 compute-0 ceph-osd[88066]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254931 data_alloc: 218103808 data_used: 9117
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: tick
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_tickets
Feb 01 15:23:51 compute-0 ceph-osd[88066]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:20.901155+0000)
Feb 01 15:23:51 compute-0 ceph-osd[88066]: do_command 'log dump' '{prefix=log dump}'
Feb 01 15:23:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:51 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:23:51 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb 01 15:23:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:23:51 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 01 15:23:51 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831620940' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 01 15:23:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:52 compute-0 ceph-mon[75179]: pgmap v1173: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:52 compute-0 ceph-mon[75179]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:52 compute-0 ceph-mon[75179]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3219406421' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 01 15:23:52 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:23:52 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2831620940' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 01 15:23:52 compute-0 nova_compute[238794]: 2026-02-01 15:23:52.315 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 01 15:23:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2019968640' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 01 15:23:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:52 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:52 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 01 15:23:52 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403618103' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 01 15:23:52 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:53 compute-0 ceph-mon[75179]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:53 compute-0 ceph-mon[75179]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2019968640' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 01 15:23:53 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3403618103' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 01 15:23:53 compute-0 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:53 compute-0 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:53 compute-0 nova_compute[238794]: 2026-02-01 15:23:53.319 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 01 15:23:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:53 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 01 15:23:53 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009433075' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 01 15:23:53 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:53 compute-0 crontab[254827]: (root) LIST (root)
Feb 01 15:23:54 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 01 15:23:54 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681521670' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 01 15:23:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: pgmap v1174: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.14580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3009433075' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3681521670' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 01 15:23:54 compute-0 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:54 compute-0 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 01 15:23:54 compute-0 nova_compute[238794]: 2026-02-01 15:23:54.320 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 01 15:23:54 compute-0 nova_compute[238794]: 2026-02-01 15:23:54.337 238798 DEBUG nova.compute.manager [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 01 15:23:54 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:54 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14590 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:54 compute-0 ceph-mgr[75469]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 01 15:23:54 compute-0 ceph-2bb4558a-e8c9-5691-acbc-5dcfb33a4f0f-mgr-compute-0-viosrg[75465]: 2026-02-01T15:23:54.726+0000 7f8298063640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 01 15:23:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 01 15:23:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472477405' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 01 15:23:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 01 15:23:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998708105' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 01 15:23:55 compute-0 ceph-mon[75179]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:55 compute-0 ceph-mon[75179]: pgmap v1175: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:55 compute-0 ceph-mon[75179]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:55 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/472477405' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 01 15:23:55 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/998708105' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce8f000/0x0/0x4ffc00000, data 0xfffd0/0x19b000, compress 0x0/0x0/0x0, omap 0xfa0a, meta 0x2bc05f6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:17.923030+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 1384448 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:18.923235+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:48.795152+0000 osd.1 (osd.1) 64 : cluster [DBG] 8.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:48.805631+0000 osd.1 (osd.1) 65 : cluster [DBG] 8.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 1368064 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 65)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:48.795152+0000 osd.1 (osd.1) 64 : cluster [DBG] 8.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:48.805631+0000 osd.1 (osd.1) 65 : cluster [DBG] 8.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:19.923497+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fce8e000/0x0/0x4ffc00000, data 0x101b6c/0x19e000, compress 0x0/0x0/0x0, omap 0xfc88, meta 0x2bc0378), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1351680 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738022 data_alloc: 218103808 data_used: 6261
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:20.923628+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 1351680 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:21.923828+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1343488 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:22.923959+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 98 handle_osd_map epochs [99,100], i have 98, src has [1,100]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1335296 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:23.924075+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0x106d25/0x1a7000, compress 0x0/0x0/0x0, omap 0x1018a, meta 0x2bbfe76), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.382933617s of 10.406901360s, submitted: 22
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1335296 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce83000/0x0/0x4ffc00000, data 0x106d25/0x1a7000, compress 0x0/0x0/0x0, omap 0x1018a, meta 0x2bbfe76), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:24.924241+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:54.771688+0000 osd.1 (osd.1) 66 : cluster [DBG] 11.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:54.782251+0000 osd.1 (osd.1) 67 : cluster [DBG] 11.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1286144 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751529 data_alloc: 218103808 data_used: 6261
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 67)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:54.771688+0000 osd.1 (osd.1) 66 : cluster [DBG] 11.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:54.782251+0000 osd.1 (osd.1) 67 : cluster [DBG] 11.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:25.924465+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:55.756569+0000 osd.1 (osd.1) 68 : cluster [DBG] 3.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:55.767122+0000 osd.1 (osd.1) 69 : cluster [DBG] 3.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1269760 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 69)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:55.756569+0000 osd.1 (osd.1) 68 : cluster [DBG] 3.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:55.767122+0000 osd.1 (osd.1) 69 : cluster [DBG] 3.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:26.924687+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1269760 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:27.924837+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:28.924993+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fce78000/0x0/0x4ffc00000, data 0x10bd95/0x1b0000, compress 0x0/0x0/0x0, omap 0x1091c, meta 0x2bbf6e4), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:29.925188+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:59.695038+0000 osd.1 (osd.1) 70 : cluster [DBG] 8.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:53:59.705585+0000 osd.1 (osd.1) 71 : cluster [DBG] 8.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 71)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:59.695038+0000 osd.1 (osd.1) 70 : cluster [DBG] 8.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:53:59.705585+0000 osd.1 (osd.1) 71 : cluster [DBG] 8.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1261568 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758405 data_alloc: 218103808 data_used: 6261
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:30.926280+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:00.675243+0000 osd.1 (osd.1) 72 : cluster [DBG] 11.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:00.685811+0000 osd.1 (osd.1) 73 : cluster [DBG] 11.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 73)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:00.675243+0000 osd.1 (osd.1) 72 : cluster [DBG] 11.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:00.685811+0000 osd.1 (osd.1) 73 : cluster [DBG] 11.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1253376 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:31.926496+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:01.686219+0000 osd.1 (osd.1) 74 : cluster [DBG] 7.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:01.696279+0000 osd.1 (osd.1) 75 : cluster [DBG] 7.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 75)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:01.686219+0000 osd.1 (osd.1) 74 : cluster [DBG] 7.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:01.696279+0000 osd.1 (osd.1) 75 : cluster [DBG] 7.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 103 handle_osd_map epochs [103,104], i have 104, src has [1,104]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1253376 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:32.926711+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:02.700968+0000 osd.1 (osd.1) 76 : cluster [DBG] 7.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:02.711539+0000 osd.1 (osd.1) 77 : cluster [DBG] 7.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 77)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:02.700968+0000 osd.1 (osd.1) 76 : cluster [DBG] 7.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:02.711539+0000 osd.1 (osd.1) 77 : cluster [DBG] 7.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1236992 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce77000/0x0/0x4ffc00000, data 0x10d931/0x1b3000, compress 0x0/0x0/0x0, omap 0x10ba6, meta 0x2bbf45a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:33.926910+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1228800 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.441488266s of 10.474997520s, submitted: 19
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:34.927078+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:04.689127+0000 osd.1 (osd.1) 78 : cluster [DBG] 3.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:04.699837+0000 osd.1 (osd.1) 79 : cluster [DBG] 3.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 79)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:04.689127+0000 osd.1 (osd.1) 78 : cluster [DBG] 3.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:04.699837+0000 osd.1 (osd.1) 79 : cluster [DBG] 3.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1220608 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777561 data_alloc: 218103808 data_used: 6261
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:35.927278+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1212416 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:36.927409+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1204224 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:37.927594+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:07.755555+0000 osd.1 (osd.1) 80 : cluster [DBG] 7.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:07.766108+0000 osd.1 (osd.1) 81 : cluster [DBG] 7.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 81)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:07.755555+0000 osd.1 (osd.1) 80 : cluster [DBG] 7.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:07.766108+0000 osd.1 (osd.1) 81 : cluster [DBG] 7.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 107 handle_osd_map epochs [108,109], i have 107, src has [1,109]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1187840 heap: 81698816 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:38.928470+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:08.720664+0000 osd.1 (osd.1) 82 : cluster [DBG] 8.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:08.734900+0000 osd.1 (osd.1) 83 : cluster [DBG] 8.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 83)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:08.720664+0000 osd.1 (osd.1) 82 : cluster [DBG] 8.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:08.734900+0000 osd.1 (osd.1) 83 : cluster [DBG] 8.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0x115e69/0x1c2000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce66000/0x0/0x4ffc00000, data 0x115e69/0x1c2000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:39.928619+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791269 data_alloc: 218103808 data_used: 6538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:40.928811+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:10.768599+0000 osd.1 (osd.1) 84 : cluster [DBG] 3.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:10.779236+0000 osd.1 (osd.1) 85 : cluster [DBG] 3.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 85)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:10.768599+0000 osd.1 (osd.1) 84 : cluster [DBG] 3.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:10.779236+0000 osd.1 (osd.1) 85 : cluster [DBG] 3.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 2154496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:41.928993+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:11.771258+0000 osd.1 (osd.1) 86 : cluster [DBG] 7.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:11.781823+0000 osd.1 (osd.1) 87 : cluster [DBG] 7.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 87)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:11.771258+0000 osd.1 (osd.1) 86 : cluster [DBG] 7.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:11.781823+0000 osd.1 (osd.1) 87 : cluster [DBG] 7.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:42.929152+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:43.929343+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:13.779389+0000 osd.1 (osd.1) 88 : cluster [DBG] 7.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:13.789997+0000 osd.1 (osd.1) 89 : cluster [DBG] 7.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 89)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:13.779389+0000 osd.1 (osd.1) 88 : cluster [DBG] 7.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:13.789997+0000 osd.1 (osd.1) 89 : cluster [DBG] 7.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.084319115s of 10.127679825s, submitted: 20
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:44.929540+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:14.816395+0000 osd.1 (osd.1) 90 : cluster [DBG] 3.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:14.826927+0000 osd.1 (osd.1) 91 : cluster [DBG] 3.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 110 handle_osd_map epochs [111,112], i have 110, src has [1,112]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 91)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:14.816395+0000 osd.1 (osd.1) 90 : cluster [DBG] 3.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:14.826927+0000 osd.1 (osd.1) 91 : cluster [DBG] 3.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 2441216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808779 data_alloc: 218103808 data_used: 6538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fce67000/0x0/0x4ffc00000, data 0x117a05/0x1c5000, compress 0x0/0x0/0x0, omap 0x11876, meta 0x2bbe78a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:45.929928+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fce5f000/0x0/0x4ffc00000, data 0x11b022/0x1cb000, compress 0x0/0x0/0x0, omap 0x11b0c, meta 0x2bbe4f4), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 2539520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:46.930047+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:16.840853+0000 osd.1 (osd.1) 92 : cluster [DBG] 11.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:16.851415+0000 osd.1 (osd.1) 93 : cluster [DBG] 11.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 93)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:16.840853+0000 osd.1 (osd.1) 92 : cluster [DBG] 11.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:16.851415+0000 osd.1 (osd.1) 93 : cluster [DBG] 11.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 2539520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:47.930182+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:17.843650+0000 osd.1 (osd.1) 94 : cluster [DBG] 8.1e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:17.854217+0000 osd.1 (osd.1) 95 : cluster [DBG] 8.1e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 95)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:17.843650+0000 osd.1 (osd.1) 94 : cluster [DBG] 8.1e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:17.854217+0000 osd.1 (osd.1) 95 : cluster [DBG] 8.1e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 2531328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:48.930378+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:18.819853+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:18.830387+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 97)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:18.819853+0000 osd.1 (osd.1) 96 : cluster [DBG] 7.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:18.830387+0000 osd.1 (osd.1) 97 : cluster [DBG] 7.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 2473984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:49.930579+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 115 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x11ff3d/0x1d4000, compress 0x0/0x0/0x0, omap 0x12257, meta 0x2bbdda9), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 2465792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823574 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:50.930699+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:20.768233+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:20.778716+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 99)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:20.768233+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:20.778716+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 2465792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:51.930881+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f(unlocked)] enter Initial
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=0 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000127 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=0 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000149
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000240 1 0.000158
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000049 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000325 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 2457600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:52.931044+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.011667 2 0.000101
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.012066 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.012157 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=116) [1] r=0 lpr=116 pi=[66,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000133 1 0.000220
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000028 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce53000/0x0/0x4ffc00000, data 0x121ad9/0x1d7000, compress 0x0/0x0/0x0, omap 0x124f4, meta 0x2bbdb0c), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:53.931220+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 2449408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.984783173s of 10.033134460s, submitted: 23
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.607841 5 0.000098
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 0'0 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=66/66 les/c/f=67/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 crt=38'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004135 4 0.000239
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000110 1 0.000073
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 lc 38'140 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039386 1 0.000049
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 118 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:54.931385+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:24.850068+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.18 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:24.860650+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.18 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.376952 1 0.000082
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started/ReplicaActive 0.420769 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] exit Started 2.028705 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[66,117)/1 pct=0'0 crt=38'483 active+remapped mbc={}] enter Reset
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 pct=0'0 crt=38'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000157 1 0.000220
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 101)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:24.850068+0000 osd.1 (osd.1) 100 : cluster [DBG] 5.18 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:24.860650+0000 osd.1 (osd.1) 101 : cluster [DBG] 5.18 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004526 2 0.000063
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=0/0 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Feb 01 15:23:55 compute-0 ceph-osd[87011]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001163 2 0.000128
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 119 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 2359296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850143 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:55.931640+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:25.844915+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:25.855522+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003746 2 0.000104
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009550 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=117/118 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=117/66 les/c/f=118/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003072 4 0.000177
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 pg_epoch: 120 pg[9.1f( v 38'483 (0'0,38'483] local-lis/les=119/120 n=6 ec=48/32 lis/c=119/66 les/c/f=120/67/0 sis=119) [1] r=0 lpr=119 pi=[66,119)/1 crt=38'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 103)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:25.844915+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:25.855522+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 2351104 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:56.931855+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:26.851201+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:26.861703+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 105)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:26.851201+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.1d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:26.861703+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.1d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 2351104 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:57.932038+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce42000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 2342912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:58.932234+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 2326528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:59.932370+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858207 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:00.932453+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:01.932589+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 2318336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce42000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:02.932754+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:32.752556+0000 osd.1 (osd.1) 106 : cluster [DBG] 10.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:32.763107+0000 osd.1 (osd.1) 107 : cluster [DBG] 10.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 107)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:32.752556+0000 osd.1 (osd.1) 106 : cluster [DBG] 10.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:32.763107+0000 osd.1 (osd.1) 107 : cluster [DBG] 10.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 2293760 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:03.932967+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 2277376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:04.933064+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:34.735467+0000 osd.1 (osd.1) 108 : cluster [DBG] 10.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:34.746034+0000 osd.1 (osd.1) 109 : cluster [DBG] 10.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 109)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:34.735467+0000 osd.1 (osd.1) 108 : cluster [DBG] 10.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:34.746034+0000 osd.1 (osd.1) 109 : cluster [DBG] 10.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 2269184 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 861245 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.880071640s of 10.929857254s, submitted: 28
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:05.933229+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:35.779821+0000 osd.1 (osd.1) 110 : cluster [DBG] 10.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:35.790342+0000 osd.1 (osd.1) 111 : cluster [DBG] 10.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 111)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:35.779821+0000 osd.1 (osd.1) 110 : cluster [DBG] 10.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:35.790342+0000 osd.1 (osd.1) 111 : cluster [DBG] 10.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 2252800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:06.933498+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 2252800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:07.933653+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:08.933838+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:09.934016+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:39.657646+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:39.667674+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 2244608 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 866071 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 113)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:39.657646+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:39.667674+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:10.934226+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:11.934408+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:12.934540+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 2236416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:13.934712+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:43.598276+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:43.608896+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 2228224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 115)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:43.598276+0000 osd.1 (osd.1) 114 : cluster [DBG] 2.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:43.608896+0000 osd.1 (osd.1) 115 : cluster [DBG] 2.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:14.934936+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 2228224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868482 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:15.935108+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.737944603s of 10.750078201s, submitted: 6
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 2220032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:16.936063+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:46.530026+0000 osd.1 (osd.1) 116 : cluster [DBG] 10.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:46.540656+0000 osd.1 (osd.1) 117 : cluster [DBG] 10.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 2220032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 117)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:46.530026+0000 osd.1 (osd.1) 116 : cluster [DBG] 10.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:46.540656+0000 osd.1 (osd.1) 117 : cluster [DBG] 10.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:17.936672+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:47.537786+0000 osd.1 (osd.1) 118 : cluster [DBG] 4.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:47.547587+0000 osd.1 (osd.1) 119 : cluster [DBG] 4.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 119)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:47.537786+0000 osd.1 (osd.1) 118 : cluster [DBG] 4.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:47.547587+0000 osd.1 (osd.1) 119 : cluster [DBG] 4.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:18.937206+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 2203648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:19.937425+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:49.489649+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:49.500279+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875717 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 121)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:49.489649+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.7 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:49.500279+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.7 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:20.937755+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:21.938562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 2195456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:22.938784+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 2187264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:23.939189+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 2187264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:24.939365+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875717 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:25.939504+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 2170880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:26.939620+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 2162688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:27.939783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 2162688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:28.939939+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.895403862s of 12.907132149s, submitted: 6
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 2154496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:29.940098+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:59.437053+0000 osd.1 (osd.1) 122 : cluster [DBG] 2.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:54:59.447548+0000 osd.1 (osd.1) 123 : cluster [DBG] 2.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 123)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:59.437053+0000 osd.1 (osd.1) 122 : cluster [DBG] 2.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:54:59.447548+0000 osd.1 (osd.1) 123 : cluster [DBG] 2.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880539 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 2138112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:30.940507+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:00.473713+0000 osd.1 (osd.1) 124 : cluster [DBG] 4.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:00.484395+0000 osd.1 (osd.1) 125 : cluster [DBG] 4.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 125)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:00.473713+0000 osd.1 (osd.1) 124 : cluster [DBG] 4.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:00.484395+0000 osd.1 (osd.1) 125 : cluster [DBG] 4.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 2138112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:31.940689+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:01.449267+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:01.459845+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 127)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:01.449267+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:01.459845+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:32.940885+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:33.941013+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 2129920 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:34.941133+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882950 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 2121728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:35.941361+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 2121728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:36.941513+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 2113536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:37.941673+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:07.443872+0000 osd.1 (osd.1) 128 : cluster [DBG] 4.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:07.454476+0000 osd.1 (osd.1) 129 : cluster [DBG] 4.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 129)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:07.443872+0000 osd.1 (osd.1) 128 : cluster [DBG] 4.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:07.454476+0000 osd.1 (osd.1) 129 : cluster [DBG] 4.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 2105344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:38.941940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 2097152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:39.942125+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885361 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 2097152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:40.942272+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:41.942429+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.042222023s of 13.057113647s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:42.942740+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:12.494349+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:12.504920+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 131)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:12.494349+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:12.504920+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 2088960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:43.942920+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 2064384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:44.943042+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:14.482690+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:14.493359+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 133)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:14.482690+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:14.493359+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890185 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 2056192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:45.943290+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 2056192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:46.943528+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 2048000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:47.943657+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 2048000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:48.943825+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 2031616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:49.944056+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:19.397563+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:19.408110+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 135)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:19.397563+0000 osd.1 (osd.1) 134 : cluster [DBG] 2.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:19.408110+0000 osd.1 (osd.1) 135 : cluster [DBG] 2.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895009 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 2023424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:50.944253+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:20.429733+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:20.440323+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 137)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:20.429733+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:20.440323+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:51.944454+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:52.944568+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.876013756s of 10.889564514s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 2015232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:53.944707+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:23.383285+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.3 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:23.393938+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.3 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 139)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:23.383285+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.3 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:23.393938+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.3 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1998848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:54.944887+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897420 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1998848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:55.945015+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1990656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:56.945206+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1990656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:57.945397+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:58.945534+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:28.354069+0000 osd.1 (osd.1) 140 : cluster [DBG] 4.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:28.364080+0000 osd.1 (osd.1) 141 : cluster [DBG] 4.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 141)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:28.354069+0000 osd.1 (osd.1) 140 : cluster [DBG] 4.5 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:28.364080+0000 osd.1 (osd.1) 141 : cluster [DBG] 4.5 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:59.945751+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899831 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1982464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:00.945906+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1974272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:01.946060+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 1966080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:02.946194+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:32.304231+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:32.314826+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80781312 unmapped: 1966080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 143)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:32.304231+0000 osd.1 (osd.1) 142 : cluster [DBG] 2.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:32.314826+0000 osd.1 (osd.1) 143 : cluster [DBG] 2.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:03.946382+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 1949696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:04.946523+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.887371063s of 11.902037621s, submitted: 6
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904653 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80797696 unmapped: 1949696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:05.946645+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:35.285911+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:35.296467+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 145)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:35.285911+0000 osd.1 (osd.1) 144 : cluster [DBG] 4.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:35.296467+0000 osd.1 (osd.1) 145 : cluster [DBG] 4.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 1941504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:06.946945+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80805888 unmapped: 1941504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:07.947149+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 1933312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:08.947345+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 1933312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:09.947492+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904653 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80822272 unmapped: 1925120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:10.947610+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1916928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:11.947770+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 1916928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:12.947984+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:13.948096+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:14.948235+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:44.141555+0000 osd.1 (osd.1) 146 : cluster [DBG] 5.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:44.152128+0000 osd.1 (osd.1) 147 : cluster [DBG] 5.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 147)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:44.141555+0000 osd.1 (osd.1) 146 : cluster [DBG] 5.f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:44.152128+0000 osd.1 (osd.1) 147 : cluster [DBG] 5.f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907064 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:15.948374+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80838656 unmapped: 1908736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:16.948520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1900544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:17.948654+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 1900544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:18.948805+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1892352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:19.948967+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1892352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907064 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:20.949178+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 1884160 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:21.949432+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 1884160 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.882419586s of 16.891012192s, submitted: 4
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:22.949605+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:52.176991+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:52.187583+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 149)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:52.176991+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:52.187583+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:23.949814+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:24.949946+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909475 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:25.950106+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:26.950226+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 1875968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:27.950344+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:57.213105+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:55:57.223957+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 151)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:57.213105+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:55:57.223957+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:28.950513+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:29.950655+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 1867776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911886 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:30.950787+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1859584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:31.950936+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:01.248937+0000 osd.1 (osd.1) 152 : cluster [DBG] 5.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:01.259321+0000 osd.1 (osd.1) 153 : cluster [DBG] 5.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 1859584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 153)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:01.248937+0000 osd.1 (osd.1) 152 : cluster [DBG] 5.9 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:01.259321+0000 osd.1 (osd.1) 153 : cluster [DBG] 5.9 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:32.951268+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1851392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:33.951585+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 1851392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:34.951997+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 1843200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:35.952134+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914297 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 1835008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.116877556s of 14.127217293s, submitted: 6
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:36.952276+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:06.304156+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:06.314702+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 1826816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 155)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:06.304156+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:06.314702+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:37.952486+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:07.328401+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:07.339030+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 1818624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 157)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:07.328401+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.19 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:07.339030+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.19 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:38.952744+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 1810432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:39.952940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:09.321217+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:09.331813+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:40.953167+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 159)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:09.321217+0000 osd.1 (osd.1) 158 : cluster [DBG] 5.16 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:09.331813+0000 osd.1 (osd.1) 159 : cluster [DBG] 5.16 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921538 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:41.953320+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 1794048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:42.953456+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 1785856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:43.953659+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:13.295650+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:13.306225+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 161)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:13.295650+0000 osd.1 (osd.1) 160 : cluster [DBG] 5.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:13.306225+0000 osd.1 (osd.1) 161 : cluster [DBG] 5.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 1785856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:44.953879+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 1777664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:45.954084+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923951 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1769472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:46.954208+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 1769472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:47.954341+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1761280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:48.954497+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 1761280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:49.954658+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 1753088 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.980370522s of 13.995172501s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:50.954882+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:20.299350+0000 osd.1 (osd.1) 162 : cluster [DBG] 2.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:20.309915+0000 osd.1 (osd.1) 163 : cluster [DBG] 2.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 926364 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 1736704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 163)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:20.299350+0000 osd.1 (osd.1) 162 : cluster [DBG] 2.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:20.309915+0000 osd.1 (osd.1) 163 : cluster [DBG] 2.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:51.955127+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:21.314202+0000 osd.1 (osd.1) 164 : cluster [DBG] 2.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:21.324737+0000 osd.1 (osd.1) 165 : cluster [DBG] 2.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 165)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:21.314202+0000 osd.1 (osd.1) 164 : cluster [DBG] 2.17 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:21.324737+0000 osd.1 (osd.1) 165 : cluster [DBG] 2.17 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:52.955383+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:53.955583+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 1720320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:54.955743+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:24.356468+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.8 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:24.366942+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.8 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 1712128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 167)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:24.356468+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.8 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:24.366942+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.8 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:55.955980+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931188 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 1712128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:56.956124+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:57.956327+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:27.319685+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:27.330248+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 169)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:27.319685+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:27.330248+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:58.956679+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 1703936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:59.956858+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 1695744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:00.957011+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933601 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 1695744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.049659729s of 11.065093994s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:01.961463+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:31.364529+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:31.374982+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 171)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:31.364529+0000 osd.1 (osd.1) 170 : cluster [DBG] 5.11 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:31.374982+0000 osd.1 (osd.1) 171 : cluster [DBG] 5.11 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:02.965851+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:03.966248+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:33.303572+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:33.314039+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 1687552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 173)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:33.303572+0000 osd.1 (osd.1) 172 : cluster [DBG] 10.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:33.314039+0000 osd.1 (osd.1) 173 : cluster [DBG] 10.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:04.966599+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1679360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:05.966958+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938429 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 1679360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:06.967189+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1671168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:07.967345+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:37.287188+0000 osd.1 (osd.1) 174 : cluster [DBG] 5.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:37.297761+0000 osd.1 (osd.1) 175 : cluster [DBG] 5.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 1671168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 175)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:37.287188+0000 osd.1 (osd.1) 174 : cluster [DBG] 5.13 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:37.297761+0000 osd.1 (osd.1) 175 : cluster [DBG] 5.13 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:08.967580+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 1662976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:09.967714+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:39.240637+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:39.251189+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 1654784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 177)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:39.240637+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:39.251189+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:10.967905+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943255 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 1646592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:11.968053+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 1638400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:12.968201+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 1638400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:13.968349+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.760448456s of 12.774148941s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 1630208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:14.968520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:44.138702+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:44.149279+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 1630208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 179)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:44.138702+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:44.149279+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:15.968735+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945668 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 1622016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:16.969051+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 1613824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:17.969238+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:47.133353+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:47.147463+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 1613824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:18.969509+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 4 last_log 183 sent 181 num 4 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:48.111493+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:48.125629+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1605632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 181)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:47.133353+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:47.147463+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 183)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:48.111493+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:48.125629+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:19.969738+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:49.125870+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:49.136447+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 1605632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 185)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:49.125870+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:49.136447+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:20.969943+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952909 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 1597440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:21.970097+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:51.132375+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:51.146528+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 1581056 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 187)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:51.132375+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:51.146528+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:22.970381+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:23.970592+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:24.970738+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 1572864 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.959541321s of 11.087907791s, submitted: 10
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:25.970973+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:55.226620+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:55.244239+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957731 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1564672 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 189)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:55.226620+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.d scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:55.244239+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.d scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:26.971271+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 1564672 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:27.972052+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:57.227290+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:57.252070+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 1556480 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 191)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:57.227290+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:57.252070+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:28.973541+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:58.266389+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:56:58.280531+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 1556480 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 193)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:58.266389+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:56:58.280531+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:29.974583+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:30.974799+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:00.324806+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:00.335363+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964964 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 195)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:00.324806+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.1 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:00.335363+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.1 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:31.975455+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:01.355202+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:01.368770+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 1548288 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 197)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:01.355202+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.c scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:01.368770+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.c scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:32.975710+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:02.346684+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:02.360668+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 1540096 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:33.975927+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 4 last_log 201 sent 199 num 4 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:03.316019+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:03.340739+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 199)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:02.346684+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:02.360668+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 1523712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:34.976120+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 201)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:03.316019+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:03.340739+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 1523712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:35.976306+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972199 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 1507328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.094460487s of 11.182248116s, submitted: 14
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:36.976671+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:06.408926+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:06.447769+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 1499136 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 203)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:06.408926+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:06.447769+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:37.976924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 1499136 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:38.977167+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 1490944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:39.977388+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 1490944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:40.977596+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:10.382594+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:10.400226+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977025 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 1482752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 205)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:10.382594+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.10 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:10.400226+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.10 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:41.977780+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 1482752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:42.977932+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:12.350068+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:12.378320+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 1474560 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 207)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:12.350068+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.12 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:12.378320+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.12 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:43.978101+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1466368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:44.978256+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:14.267975+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:14.310152+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 1466368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 209)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:14.267975+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:14.310152+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:45.978493+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981849 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 1458176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:46.978644+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:16.259116+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:16.308524+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1449984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 211)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:16.259116+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:16.308524+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:47.978795+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 1449984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:48.978928+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.739817619s of 12.821089745s, submitted: 10
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 1425408 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:49.979076+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:19.229962+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:19.272305+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:50.979249+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 213)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:19.229962+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:19.272305+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989082 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:51.979452+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:21.184045+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:21.229823+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 215)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:21.184045+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.4 scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:21.229823+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.4 scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 1417216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:52.979638+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1400832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:53.979851+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:23.212951+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:23.237685+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 1400832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 217)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:23.212951+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.1a scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:23.237685+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.1a scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:54.980079+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:24.256956+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  will send 2026-02-01T14:57:24.285215+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 1384448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client handle_log_ack log(last 219)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:24.256956+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Feb 01 15:23:55 compute-0 ceph-osd[87011]: log_client  logged 2026-02-01T14:57:24.285215+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:55.980289+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1376256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:56.980421+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 1376256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:57.980576+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1368064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:58.980791+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 1368064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:59.980936+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:00.981099+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:01.981245+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 1359872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:02.981455+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:03.981781+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:04.981919+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 1351680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:05.982029+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 1343488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:06.982154+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 1343488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:07.982321+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:08.982506+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:09.983102+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 1335296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:10.983332+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1318912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:11.983472+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 1318912 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:12.983633+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1310720 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:13.983790+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 1310720 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:14.983942+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:15.984086+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:16.984207+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 1302528 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:17.984348+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1294336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:18.984516+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 1294336 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:19.984650+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 1286144 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:20.984767+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 1286144 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:21.984895+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:22.985046+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:23.985254+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 1277952 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:24.985408+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 1261568 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:25.985558+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 1261568 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:26.985799+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 1253376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:27.985976+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 1253376 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:28.986187+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 1245184 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:29.986322+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 1236992 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:30.986475+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 1236992 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:31.986608+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:32.986795+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:33.986983+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 1228800 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:34.987137+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 1212416 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:35.987284+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 1204224 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:36.987538+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1196032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:37.987715+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 1196032 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:38.987924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:39.988076+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:40.988199+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 1187840 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:41.988353+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1179648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:42.988572+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 1179648 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:43.988704+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1171456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:44.988830+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 1171456 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:45.989026+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:46.989232+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:47.989461+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 1163264 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:48.989630+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1155072 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:49.989791+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1146880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:50.989930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 1146880 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:51.990108+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1138688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:52.990252+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 1138688 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:53.990350+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:54.990537+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:55.990690+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1130496 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:56.990844+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 1122304 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:57.990977+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 1122304 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:58.991482+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1114112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:59.991599+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 1114112 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:00.991755+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:01.991901+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:02.992049+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 1097728 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:03.992190+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1089536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:04.992378+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 1089536 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:05.992586+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:06.992791+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:07.992897+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 1081344 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:08.993081+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1073152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:09.993249+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 1073152 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:10.993382+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:11.993535+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:12.993670+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 1064960 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:13.993810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 1056768 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:14.994035+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 1056768 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:15.994224+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1048576 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:16.994412+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 1048576 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:17.994573+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1040384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:18.994741+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 1040384 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:19.994865+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 1032192 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:20.994986+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1024000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:21.995123+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 1024000 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:22.995260+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 1015808 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:23.995368+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 1007616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:24.995497+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 1007616 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:25.995736+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:26.995955+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:27.996086+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 999424 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:28.996192+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 991232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:29.996351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 991232 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:30.996480+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 983040 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:31.996630+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 983040 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:32.996776+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:33.996920+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:34.997097+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 974848 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:35.997334+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 966656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:36.997584+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 966656 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:37.997707+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 958464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:38.997883+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 958464 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:39.998061+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:40.998269+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:41.998456+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 950272 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:42.998672+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 942080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:43.998879+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 942080 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:44.999026+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 925696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:45.999253+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 925696 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:46.999459+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:47.999607+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:48.999813+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 917504 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:49.999939+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 909312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:51.000146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 909312 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:52.000351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:53.000645+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:54.000796+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 901120 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:55.000950+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 892928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:56.001125+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 892928 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:57.001324+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:58.001719+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:59.001911+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 884736 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:00.002050+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:01.002187+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:02.002327+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 876544 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:03.002475+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 868352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:04.002671+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 868352 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:05.002824+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 851968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:06.003017+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 851968 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:07.003187+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:08.003382+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:09.003577+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 843776 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:10.003719+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 835584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:11.003855+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 835584 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:12.003997+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 827392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:13.004107+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 827392 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:14.004227+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:15.004416+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:16.004556+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 819200 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:17.004676+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 811008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:18.004854+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 811008 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:19.005019+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 802816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:20.005197+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 802816 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:21.005377+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:22.005506+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:23.005675+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 794624 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:24.005835+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:25.006009+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:26.006164+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 786432 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:27.006280+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 778240 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:28.006425+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 778240 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:29.006558+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 770048 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:30.006710+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 761856 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:31.006825+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 753664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:32.006980+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 753664 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:33.007114+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:34.007283+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:35.007491+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:36.007602+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 745472 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:37.007771+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 737280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:38.007918+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 737280 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:39.008135+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 720896 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:40.008374+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 720896 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:41.008568+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 712704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:42.008845+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 712704 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:43.009005+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 704512 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:44.009160+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 704512 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:45.009349+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:46.009478+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:47.009633+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:48.009746+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:49.009921+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:50.010108+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:51.010242+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 696320 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:52.010457+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:53.010623+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:54.010773+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 688128 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:55.010930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 679936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:56.011075+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 679936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:57.011227+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 671744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:58.011365+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 671744 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:59.011550+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:00.011726+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:01.011941+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 663552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:02.012173+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:03.012388+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:04.012556+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 655360 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:05.012670+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 647168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:06.012808+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 647168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:07.012937+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 638976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:08.013125+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 638976 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:09.013399+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:10.013537+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:11.013664+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 630784 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:12.013792+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:13.013966+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:14.014150+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 622592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:15.014378+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 614400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:16.014507+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 614400 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:17.014666+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 606208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:18.726842+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 606208 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:19.726995+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:20.727102+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:21.727528+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 598016 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:22.727661+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:23.727808+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:24.727959+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:25.728073+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:26.728230+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:27.728393+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:28.728538+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 589824 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:29.728747+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:30.728860+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 581632 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:31.728994+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:32.729183+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:33.729368+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 573440 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:34.729522+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6923 writes, 28K keys, 6923 commit groups, 1.0 writes per commit group, ingest: 19.77 MB, 0.03 MB/s
                                           Interval WAL: 6923 writes, 1318 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 507904 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:35.729638+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 507904 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:36.729753+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:37.730080+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:38.730331+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 499712 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:39.730593+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 491520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:40.730764+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 491520 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:41.730904+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 483328 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:42.731077+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:43.731246+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:44.731377+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 466944 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:45.731559+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 458752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:46.731711+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 458752 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:47.731877+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 450560 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:48.731976+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 442368 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:49.732095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 434176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:50.732340+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 434176 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:51.732485+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:52.732627+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:53.732829+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 425984 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:54.733013+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:55.733213+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:56.733562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 417792 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:57.733703+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 409600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:58.733921+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 409600 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:59.734075+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:00.734218+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:01.734356+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 393216 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:02.734479+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 385024 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:03.734602+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 385024 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:04.734721+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:05.734838+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:06.734959+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 376832 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:07.735080+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:08.735208+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:09.735357+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 368640 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:10.735457+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 360448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:11.735557+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 360448 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:12.735678+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 352256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:13.735801+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 352256 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:14.735943+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:15.736111+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:16.736257+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 344064 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:17.736424+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 335872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:18.736588+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 335872 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:19.736764+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:20.736906+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:21.737032+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 327680 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:22.737131+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:23.737273+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:24.737475+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 319488 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:25.737651+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 311296 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:26.737999+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 277.726654053s of 277.740570068s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 1359872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:27.738164+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:28.738292+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:29.738496+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:30.738619+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:31.738745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:32.738845+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:33.738988+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:34.739129+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:35.739288+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:36.739481+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:37.739660+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:38.739783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 442368 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:39.739926+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 434176 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:40.740038+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 434176 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:41.740193+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 425984 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:42.740391+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 425984 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:43.740531+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:44.740659+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:45.740838+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 417792 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:46.740982+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 409600 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:47.741177+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 409600 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:48.742881+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 401408 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:49.743056+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 393216 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:50.743218+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 393216 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:51.743399+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 385024 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:52.743531+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 385024 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:53.743668+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:54.743862+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:55.744045+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 376832 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:56.744320+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 368640 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:57.744467+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 368640 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:58.744805+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:59.745393+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:00.745572+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 360448 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:01.745744+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 352256 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:02.745876+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 352256 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:03.746338+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:04.746482+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:05.746820+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:06.746974+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 335872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:07.747125+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 335872 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:08.747272+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:09.747508+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:10.747639+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 327680 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:11.747797+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 319488 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:12.747983+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 319488 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:13.748108+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 311296 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:14.748223+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 311296 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:15.748351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:16.748471+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811723650' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:17.748582+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 303104 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:18.748714+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 294912 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:19.749112+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 294912 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:20.749250+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 286720 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:21.749514+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 286720 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:22.749652+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:23.749814+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:24.749957+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 278528 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:25.750121+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 270336 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:26.750327+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 270336 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:27.750521+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:28.750706+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:29.750893+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:30.751043+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:31.751178+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:32.751345+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:33.751489+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:34.751605+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:35.751783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:36.751935+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:37.752072+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:38.752198+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:39.752356+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:40.752532+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:41.752715+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:42.752856+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:43.752965+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:44.753081+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:45.753288+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:46.753449+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:47.753592+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:48.753743+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:49.753936+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:50.754079+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:51.754259+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:52.754383+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:53.754557+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:54.754708+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:55.754853+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:56.755428+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:57.755567+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:58.755746+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:59.755893+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:00.756199+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:01.756366+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:02.756748+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:03.756961+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:04.757138+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:05.757312+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:06.757510+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:07.757889+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:08.758019+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:09.758203+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:10.758341+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:11.758473+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:12.758609+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:13.758793+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:14.758924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:15.759070+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:16.759319+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:17.759488+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:18.759627+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:19.759780+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:20.759910+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:21.760046+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:22.760212+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:23.760499+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:24.760670+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:25.760844+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:26.760972+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:27.761159+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:28.761335+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:29.761520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:30.761698+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:31.761825+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:32.761960+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:33.762112+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:34.762244+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:35.762398+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:36.762548+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:37.762754+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:38.762915+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:39.763060+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:40.763188+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:41.763324+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:42.763446+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:43.763624+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:44.763806+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:45.764074+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:46.764427+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:47.764652+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:48.764791+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:49.765067+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:50.765229+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:51.765350+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:52.765493+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:53.765646+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:54.765782+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:55.765950+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:56.766075+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:57.766231+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:58.766381+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:59.766534+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:00.766700+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:01.767033+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:02.767177+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:03.767376+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 212992 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:04.767555+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:05.767754+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:06.767952+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:07.768091+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:08.768245+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:09.768434+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:10.768563+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:11.768679+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:12.768796+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:13.768973+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:14.769183+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 204800 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:15.769374+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:16.769572+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:17.769777+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:18.769913+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:19.770088+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:20.770209+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:21.770354+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:22.770467+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:23.770635+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:24.770777+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:25.770890+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:26.771037+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:27.771145+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:28.771273+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:29.771451+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:30.771605+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:31.771773+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 196608 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:32.771917+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:33.772059+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:34.772176+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:35.772289+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:36.772423+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:37.772555+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:38.772684+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:39.772813+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:40.772941+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:41.773256+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:42.773359+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:43.773515+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:44.773653+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:45.774003+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:46.774152+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:47.774278+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:48.774474+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:49.774635+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:50.774754+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:51.774910+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:52.775030+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:53.775146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:54.775266+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:55.775419+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:56.775516+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:57.775651+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:58.775793+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:59.775991+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:00.776114+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:01.776266+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:02.776466+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:03.776612+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:04.776923+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:05.777133+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:06.777288+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:07.777462+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:08.777580+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:09.777713+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:10.777813+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:11.777937+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:12.778142+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:13.778319+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:14.778579+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:15.778770+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:16.778889+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:17.779074+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:18.779278+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:19.779647+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:20.779744+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:21.779958+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:22.780238+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:23.780383+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:24.780479+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:25.780587+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:26.780828+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:27.780922+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:28.781035+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:29.781150+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:30.781241+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:31.781469+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:32.781599+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:33.781679+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:34.781801+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:35.781958+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:36.782102+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:37.782337+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:38.782511+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:39.782696+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 344064 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc ms_handle_reset ms_handle_reset con 0x55a03c608000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: get_auth_request con 0x55a03d19dc00 auth_method 0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:40.782837+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 ms_handle_reset con 0x55a03c609800 session 0x55a03d0eafc0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03d146000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 ms_handle_reset con 0x55a03d146400 session 0x55a03d0eac40
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03c609800
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:41.783128+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:42.783282+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:43.783450+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:44.783616+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:45.783761+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:46.783888+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:47.783997+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:48.784119+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:49.784196+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:50.784302+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:51.784392+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:52.784518+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:53.784672+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:54.784825+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:55.785000+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:56.785142+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:57.785338+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:58.785467+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:59.785615+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:00.785778+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:01.785921+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:02.786089+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:03.786229+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:04.786388+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:05.786522+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:06.786649+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:07.786789+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:08.786903+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:09.787036+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:10.787161+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:11.787274+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:12.787407+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:13.787629+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:14.787772+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:15.787911+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:16.788095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:17.788241+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:18.969550+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:19.969746+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:20.969930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:21.970205+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:22.970370+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:23.970680+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:24.970836+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:25.971087+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:26.971241+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.872650146s of 300.119934082s, submitted: 90
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:27.971444+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:28.971595+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:29.971786+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:30.971905+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:31.972064+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:32.972244+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:33.972364+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:34.972499+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:35.972662+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:36.972796+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:37.972959+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:38.973103+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:39.973331+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:40.973430+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:41.973567+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:42.973707+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:43.973874+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:44.974091+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:45.974286+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:46.974506+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:47.974710+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:48.974836+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:49.975056+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:50.975184+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:51.975373+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:52.975558+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:53.975753+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:54.975944+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:55.976108+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:56.976270+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:57.976423+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:58.976562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:59.976730+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:00.976860+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:01.977058+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:02.977249+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:03.977417+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:04.977600+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:05.977948+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:06.978129+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:07.978358+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:08.978475+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:09.978660+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:11.003062+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:12.003269+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:13.003455+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:14.003799+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:15.003924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:16.004104+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:17.004341+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:18.004523+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:19.004693+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:20.004887+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:21.005055+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:22.005188+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:23.005324+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:24.005449+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:25.005581+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:26.005711+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 139264 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:27.005860+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:28.006020+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:29.006162+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:30.006370+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:31.006533+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:32.006726+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:33.006906+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:34.007095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:35.007266+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:36.007448+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:37.007594+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:38.007731+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:39.007874+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:40.008026+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:41.008171+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:42.008353+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:43.008487+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:44.008672+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:45.008829+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:46.009025+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:47.009218+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:48.009428+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:49.009583+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:50.009824+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:51.010001+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:52.010216+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:53.010377+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:54.010564+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:55.010701+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:56.010880+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:57.011159+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:58.011391+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 131072 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:59.011538+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:00.011861+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:01.012115+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:02.012406+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:03.012568+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:04.012741+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:05.012930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:06.013117+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:07.013277+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:08.013500+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:09.013648+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:10.013828+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:11.013957+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:12.014089+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:13.014230+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:14.014385+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:15.014542+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:16.014879+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:17.015029+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:18.015199+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:19.015375+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 122880 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:20.015577+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:21.015800+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:22.016000+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:23.016166+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:24.016371+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:25.016555+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:26.016741+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:27.016879+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:28.017047+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:29.017189+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:30.017386+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:31.017541+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:32.017675+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:33.017845+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:34.018035+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:35.018216+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:36.018449+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 114688 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:37.018602+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:38.018767+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:39.018910+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:40.019095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:41.019237+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:42.019502+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:43.019679+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:44.019845+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:45.019994+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:46.020133+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:47.020276+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:48.020453+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 262144 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:49.020572+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:50.020745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:51.020885+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:52.021044+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:53.021231+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:54.021374+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:55.021555+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:56.021721+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 253952 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:57.021872+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:58.022014+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:59.022164+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:00.022340+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:01.022475+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:02.022590+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:03.022724+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:04.022852+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:05.022938+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:06.023200+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:07.023333+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:08.023422+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:09.023589+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:10.023717+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:11.023932+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:12.024105+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:13.024260+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:14.024417+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:15.024585+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:16.024784+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:17.024916+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:18.025124+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:19.025249+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:20.025486+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:21.025615+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:22.025718+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:23.025891+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:24.026047+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:25.026194+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:26.026365+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:27.026532+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:28.026676+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:29.026820+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:30.027007+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:31.027168+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:32.027338+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:33.027512+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:34.027656+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:35.027835+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:36.027990+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:37.028159+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:38.028419+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:39.028540+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:40.028841+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 245760 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:41.028989+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:42.029061+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:43.029229+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:44.029389+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:45.029590+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:46.029708+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:47.029875+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:48.030009+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:49.030132+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:50.030435+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:51.030571+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:52.032580+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:53.033142+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:54.033535+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:55.034770+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:56.036959+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 237568 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:57.038787+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:58.039723+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:59.039919+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:00.041123+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:01.041542+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:02.042384+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:03.042882+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:04.043085+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:05.043821+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:06.044292+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:07.044971+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:08.045291+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:09.045527+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:10.045830+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:11.046039+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:12.046188+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:13.046345+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:14.046520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:15.046699+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:16.047055+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:17.047366+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:18.047567+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:19.047773+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:20.047946+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:21.048127+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:22.048372+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 229376 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:23.048764+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:24.049050+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:25.049285+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:26.049574+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:27.049899+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:28.050141+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:29.050393+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:30.050563+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:31.050722+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:32.050855+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:33.050987+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:34.051093+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 221184 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7147 writes, 29K keys, 7147 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7147 writes, 1430 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a03a94b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:35.051351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:36.051502+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:37.051739+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:38.051918+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:39.052259+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:40.052520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:41.052663+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:42.052846+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:43.053016+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:44.053265+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:45.053428+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:46.053562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:47.053745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 188416 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:48.054071+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:49.054233+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:50.054554+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:51.054766+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:52.054930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:53.055066+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:54.055188+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:55.055342+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:56.055477+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:57.055762+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:58.055988+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:59.056173+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:00.056427+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:01.056567+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:02.056728+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:03.056949+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:04.057122+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:05.057284+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:06.057464+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:07.057614+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:08.057813+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:09.057956+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:10.058118+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:11.058375+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:12.058528+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:13.058644+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:14.058811+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:15.059064+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:16.059208+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:17.059337+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:18.059487+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 180224 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:19.059664+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:20.059815+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:21.059979+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:22.060111+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:23.060343+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:24.060510+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:25.060662+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:26.060818+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 172032 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.934509277s of 299.964233398s, submitted: 22
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:27.060989+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 163840 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:28.061118+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 155648 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:29.061336+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:30.061651+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:31.061943+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:32.062113+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:33.062368+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:34.062541+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:35.062810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:36.062964+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:37.063150+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:38.063367+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:39.063524+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:40.063713+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:41.063908+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:42.064123+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:43.064264+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:44.064395+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:45.064540+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:46.064753+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:47.064921+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:48.065065+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:49.065215+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:50.065428+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:51.065570+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:52.065701+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:53.065810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:54.066012+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:55.066160+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:56.066377+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:57.066562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:58.066683+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:59.066814+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:00.066940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:01.067066+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:02.067205+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:03.067415+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:04.067655+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:05.067864+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:06.068114+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:07.068258+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:08.068468+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:09.068584+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993908 data_alloc: 218103808 data_used: 7340
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:10.068833+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 147456 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03cfeb000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:11.069022+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1015808 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fce48000/0x0/0x4ffc00000, data 0x1284e3/0x1e4000, compress 0x0/0x0/0x0, omap 0x12f7c, meta 0x2bbd084), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 120 handle_osd_map epochs [120,121], i have 121, src has [1,121]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.184757233s of 44.368446350s, submitted: 90
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:12.069173+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 2031616 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:13.069387+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 11247616 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 123 ms_handle_reset con 0x55a03cfeb000 session 0x55a03d13d880
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03c61dc00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:14.069502+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 11075584 heap: 95207424 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072775 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:15.069634+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 19283968 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 124 ms_handle_reset con 0x55a03c61dc00 session 0x55a03cc7ce00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:16.069874+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fc1c7000/0x0/0x4ffc00000, data 0xd9f438/0xe63000, compress 0x0/0x0/0x0, omap 0x13998, meta 0x2bbc668), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:17.069985+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:18.070193+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:19.070354+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076213 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:20.070557+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:21.070745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:22.070912+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:23.071058+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:24.071232+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078699 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:25.071445+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:26.071697+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:27.071919+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:28.072081+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:29.072220+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078699 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:30.072403+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:31.072550+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:32.072731+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:33.072901+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c4000/0x0/0x4ffc00000, data 0xda0eb7/0xe66000, compress 0x0/0x0/0x0, omap 0x13c16, meta 0x2bbc3ea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:34.073037+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078699 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:35.073197+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:36.073380+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 19275776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:37.073546+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03c61d800
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.560483932s of 25.746004105s, submitted: 48
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 19038208 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:38.073709+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 17752064 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03c61c000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1c1000/0x0/0x4ffc00000, data 0xda65da/0xe6b000, compress 0x0/0x0/0x0, omap 0x13ed8, meta 0x2bbc128), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 10
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:39.073857+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 17481728 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081933 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:40.074069+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 17481728 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:41.074215+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 17383424 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:42.074374+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 17268736 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:43.074580+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 17260544 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1ad000/0x0/0x4ffc00000, data 0xdb9539/0xe7f000, compress 0x0/0x0/0x0, omap 0x14813, meta 0x2bbb7ed), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:44.074812+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 16187392 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082573 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:45.074940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 16113664 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc1a8000/0x0/0x4ffc00000, data 0xdbe324/0xe84000, compress 0x0/0x0/0x0, omap 0x1486b, meta 0x2bbb795), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:46.075090+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 16039936 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 11
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:47.075239+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.879360199s of 10.002609253s, submitted: 54
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 16097280 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:48.075388+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87605248 unmapped: 15998976 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:49.075569+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 15974400 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089755 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:50.075769+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 15908864 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:51.075915+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 16162816 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc17b000/0x0/0x4ffc00000, data 0xde8e50/0xeb1000, compress 0x0/0x0/0x0, omap 0x14fc6, meta 0x2bbb03a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:52.076139+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 16162816 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fc179000/0x0/0x4ffc00000, data 0xdea308/0xeb3000, compress 0x0/0x0/0x0, omap 0x15410, meta 0x2bbabf0), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:53.076343+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 16154624 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:54.076480+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 16023552 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093109 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:55.076599+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 15982592 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:56.076741+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87605248 unmapped: 15998976 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:57.076895+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.929249763s of 10.088311195s, submitted: 84
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 15745024 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:58.077013+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 15745024 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fc162000/0x0/0x4ffc00000, data 0xe02748/0xeca000, compress 0x0/0x0/0x0, omap 0x15c2c, meta 0x2bba3d4), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:59.077146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 15679488 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fc15d000/0x0/0x4ffc00000, data 0xe085a1/0xecf000, compress 0x0/0x0/0x0, omap 0x15dee, meta 0x2bba212), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096063 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:00.077345+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 15605760 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:01.077458+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 126 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 15605760 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:02.077569+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 88014848 unmapped: 15589376 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:03.077727+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 88014848 unmapped: 15589376 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:04.077912+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 15556608 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fc151000/0x0/0x4ffc00000, data 0xe12b90/0xedb000, compress 0x0/0x0/0x0, omap 0x16516, meta 0x2bb9aea), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101531 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:05.078031+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 15556608 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:06.078176+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90185728 unmapped: 13418496 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:07.078288+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90202112 unmapped: 13402112 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.353428841s of 10.494646072s, submitted: 58
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:08.078447+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90267648 unmapped: 13336576 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 heartbeat osd_stat(store_statfs(0x4faf96000/0x0/0x4ffc00000, data 0xe2b577/0xef6000, compress 0x0/0x0/0x0, omap 0x169b1, meta 0x3d5964f), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:09.078586+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90324992 unmapped: 13279232 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102433 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:10.078761+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 13271040 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:11.078924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 13271040 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:12.079036+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 13131776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 heartbeat osd_stat(store_statfs(0x4faf7e000/0x0/0x4ffc00000, data 0xe42d00/0xf0e000, compress 0x0/0x0/0x0, omap 0x173a7, meta 0x3d58c59), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:13.079178+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 13131776 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:14.079350+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 13066240 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 heartbeat osd_stat(store_statfs(0x4faf70000/0x0/0x4ffc00000, data 0xe4f4ec/0xf1c000, compress 0x0/0x0/0x0, omap 0x1751e, meta 0x3d58ae2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109655 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:15.079484+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90619904 unmapped: 12984320 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:16.079691+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 12959744 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:17.079816+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 12959744 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:18.079937+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90644480 unmapped: 12959744 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.522821426s of 10.699725151s, submitted: 67
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 heartbeat osd_stat(store_statfs(0x4faf5b000/0x0/0x4ffc00000, data 0xe6517b/0xf31000, compress 0x0/0x0/0x0, omap 0x17d07, meta 0x3d582f9), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:19.080063+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 12779520 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112081 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:20.080258+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90931200 unmapped: 12673024 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:21.080333+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 12599296 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:22.080505+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 12599296 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:23.080612+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 12599296 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 128 heartbeat osd_stat(store_statfs(0x4faf3d000/0x0/0x4ffc00000, data 0xe84443/0xf4f000, compress 0x0/0x0/0x0, omap 0x17e61, meta 0x3d5819f), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:24.080749+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91004928 unmapped: 12599296 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112193 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:25.080930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 12550144 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:26.081055+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91078656 unmapped: 12525568 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:27.081193+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91078656 unmapped: 12525568 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:28.081327+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 12509184 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.708472252s of 10.002384186s, submitted: 108
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:29.081484+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 12509184 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 130 heartbeat osd_stat(store_statfs(0x4faf11000/0x0/0x4ffc00000, data 0xeabcc7/0xf79000, compress 0x0/0x0/0x0, omap 0x182ad, meta 0x3d57d53), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119505 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:30.081688+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 12640256 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:31.081819+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 11558912 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:32.081947+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 11558912 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:33.082152+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 11468800 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 130 heartbeat osd_stat(store_statfs(0x4faefc000/0x0/0x4ffc00000, data 0xec2ff6/0xf90000, compress 0x0/0x0/0x0, omap 0x19195, meta 0x3d56e6b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:34.082355+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 11304960 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 130 heartbeat osd_stat(store_statfs(0x4faefc000/0x0/0x4ffc00000, data 0xec305b/0xf90000, compress 0x0/0x0/0x0, omap 0x19225, meta 0x3d56ddb), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120047 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:35.082542+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 11304960 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:36.082681+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92364800 unmapped: 11239424 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:37.082852+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92364800 unmapped: 11239424 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 131 heartbeat osd_stat(store_statfs(0x4faecb000/0x0/0x4ffc00000, data 0xef1646/0xfbf000, compress 0x0/0x0/0x0, omap 0x199af, meta 0x3d56651), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 131 heartbeat osd_stat(store_statfs(0x4faebb000/0x0/0x4ffc00000, data 0xf01105/0xfcf000, compress 0x0/0x0/0x0, omap 0x199af, meta 0x3d56651), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:38.082998+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 92430336 unmapped: 11173888 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.895339012s of 10.002302170s, submitted: 65
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:39.083214+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 131 handle_osd_map epochs [131,132], i have 132, src has [1,132]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 9822208 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134811 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:40.083435+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 9519104 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:41.083636+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94085120 unmapped: 9519104 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:42.083814+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 9306112 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:43.084028+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 9601024 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fae6c000/0x0/0x4ffc00000, data 0xf50b9c/0x1020000, compress 0x0/0x0/0x0, omap 0x1a80d, meta 0x3d557f3), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:44.084193+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94068736 unmapped: 9535488 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132747 data_alloc: 218103808 data_used: 8538
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:45.084359+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 9396224 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:46.084568+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 8626176 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:47.084800+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 8568832 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:48.085049+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.525683403s of 10.002911568s, submitted: 146
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94576640 unmapped: 9027584 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:49.085265+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94748672 unmapped: 8855552 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fae37000/0x0/0x4ffc00000, data 0xf810d5/0x1055000, compress 0x0/0x0/0x0, omap 0x1b188, meta 0x3d54e78), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146111 data_alloc: 218103808 data_used: 9460
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:50.085581+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 8699904 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:51.086045+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 8404992 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fae17000/0x0/0x4ffc00000, data 0xf9db14/0x1073000, compress 0x0/0x0/0x0, omap 0x1b672, meta 0x3d5498e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:52.086181+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 8396800 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:53.086290+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 8142848 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:54.086469+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fade4000/0x0/0x4ffc00000, data 0xfd14b8/0x10a8000, compress 0x0/0x0/0x0, omap 0x1bc3d, meta 0x3d543c3), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 7880704 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:55.086634+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156755 data_alloc: 218103808 data_used: 9460
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 7880704 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:56.086826+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 7864320 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:57.087014+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 7798784 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:58.087229+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.799532890s of 10.002136230s, submitted: 138
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 7790592 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:59.087385+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fadc1000/0x0/0x4ffc00000, data 0xff2230/0x10cb000, compress 0x0/0x0/0x0, omap 0x1c356, meta 0x3d53caa), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95879168 unmapped: 7725056 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:00.087780+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158757 data_alloc: 218103808 data_used: 9748
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 7847936 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:01.087929+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 139 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95772672 unmapped: 7831552 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fad9e000/0x0/0x4ffc00000, data 0x10121fe/0x10ec000, compress 0x0/0x0/0x0, omap 0x1c67d, meta 0x3d53983), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:02.088062+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 7864320 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:03.088191+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 8192000 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:04.088355+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96501760 unmapped: 7102464 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fad9d000/0x0/0x4ffc00000, data 0x1015547/0x10ef000, compress 0x0/0x0/0x0, omap 0x1c9ab, meta 0x3d53655), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:05.088549+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162455 data_alloc: 218103808 data_used: 9748
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96501760 unmapped: 7102464 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:06.088777+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 6963200 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:07.089003+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 6963200 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:08.089152+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fad7f000/0x0/0x4ffc00000, data 0x102f479/0x110b000, compress 0x0/0x0/0x0, omap 0x1ccbb, meta 0x3d53345), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 6963200 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:09.089356+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.261453629s of 10.650949478s, submitted: 52
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 6963200 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:10.089517+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170005 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 6766592 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:11.089670+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 6758400 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fad5b000/0x0/0x4ffc00000, data 0x10540d1/0x1131000, compress 0x0/0x0/0x0, omap 0x1d111, meta 0x3d52eef), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:12.089810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 6660096 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:13.089952+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 6553600 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:14.090087+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97099776 unmapped: 6504448 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:15.090332+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172227 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 6365184 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:16.090483+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 6348800 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fad33000/0x0/0x4ffc00000, data 0x1079dd6/0x1157000, compress 0x0/0x0/0x0, omap 0x1d644, meta 0x3d529bc), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:17.090699+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 6348800 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fad33000/0x0/0x4ffc00000, data 0x1079dd6/0x1157000, compress 0x0/0x0/0x0, omap 0x1d644, meta 0x3d529bc), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:18.090909+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97419264 unmapped: 6184960 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97419264 unmapped: 6184960 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.189771652s of 10.683373451s, submitted: 61
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:19.799682+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fad10000/0x0/0x4ffc00000, data 0x109e8b0/0x117c000, compress 0x0/0x0/0x0, omap 0x1d972, meta 0x3d5268e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177013 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97501184 unmapped: 6103040 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fad06000/0x0/0x4ffc00000, data 0x10a851e/0x1186000, compress 0x0/0x0/0x0, omap 0x1d972, meta 0x3d5268e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:20.799822+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97222656 unmapped: 6381568 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:21.799923+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98435072 unmapped: 5169152 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:22.800044+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98435072 unmapped: 5169152 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:23.800141+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 5627904 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4face0000/0x0/0x4ffc00000, data 0x10cefdd/0x11ac000, compress 0x0/0x0/0x0, omap 0x1df3a, meta 0x3d520c6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:24.800283+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178961 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98140160 unmapped: 5464064 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:25.800466+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 5201920 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:26.800568+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5513216 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:27.800772+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98091008 unmapped: 5513216 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:28.800925+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98189312 unmapped: 5414912 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:29.801038+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.815156937s of 10.002387047s, submitted: 59
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fac92000/0x0/0x4ffc00000, data 0x111ba41/0x11fa000, compress 0x0/0x0/0x0, omap 0x1e7e6, meta 0x3d5181a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186289 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fac6e000/0x0/0x4ffc00000, data 0x113c8e5/0x121c000, compress 0x0/0x0/0x0, omap 0x1e9ec, meta 0x3d51614), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98205696 unmapped: 5398528 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:30.801201+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 5234688 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:31.801407+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fac4c000/0x0/0x4ffc00000, data 0x11602c9/0x123e000, compress 0x0/0x0/0x0, omap 0x1edf8, meta 0x3d51208), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 5234688 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:32.801585+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98000896 unmapped: 5603328 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:33.801842+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 5595136 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:34.801964+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190235 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98230272 unmapped: 5373952 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:35.802135+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98361344 unmapped: 5242880 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:36.802289+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 5234688 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:37.802440+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fabfe000/0x0/0x4ffc00000, data 0x11acfcc/0x128c000, compress 0x0/0x0/0x0, omap 0x1f3df, meta 0x3d50c21), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 5234688 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:38.802575+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 5218304 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:39.802652+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.567680359s of 10.001662254s, submitted: 115
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196267 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3899392 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:40.802868+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fabe6000/0x0/0x4ffc00000, data 0x11c70c2/0x12a6000, compress 0x0/0x0/0x0, omap 0x1f455, meta 0x3d50bab), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fabe6000/0x0/0x4ffc00000, data 0x11c70c2/0x12a6000, compress 0x0/0x0/0x0, omap 0x1f654, meta 0x3d509ac), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3899392 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:41.803351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 3825664 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:42.803803+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99459072 unmapped: 4145152 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:43.803998+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99459072 unmapped: 4145152 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:44.804199+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197631 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 3817472 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:45.804507+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fabb2000/0x0/0x4ffc00000, data 0x11fc2ed/0x12da000, compress 0x0/0x0/0x0, omap 0x1fcd5, meta 0x3d5032b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 3661824 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:46.804663+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100024320 unmapped: 3579904 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:47.804792+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100163584 unmapped: 3440640 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:48.805148+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 4153344 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.921881676s of 10.000579834s, submitted: 45
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:49.805261+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202201 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99631104 unmapped: 3973120 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:50.805383+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab73000/0x0/0x4ffc00000, data 0x123aeb8/0x1319000, compress 0x0/0x0/0x0, omap 0x20329, meta 0x3d4fcd7), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 3964928 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:51.805539+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 3964928 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:52.805726+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99647488 unmapped: 3956736 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fab63000/0x0/0x4ffc00000, data 0x124b703/0x1329000, compress 0x0/0x0/0x0, omap 0x20571, meta 0x3d4fa8f), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:53.805869+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99655680 unmapped: 3948544 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:54.805992+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201845 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 144 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3858432 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:55.806122+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3858432 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:56.806354+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 3784704 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:57.806537+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fab30000/0x0/0x4ffc00000, data 0x127cb27/0x135c000, compress 0x0/0x0/0x0, omap 0x20929, meta 0x3d4f6d7), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 3784704 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:58.806680+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 3784704 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.820492744s of 10.001801491s, submitted: 62
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:59.806781+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207095 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 3661824 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:00.806973+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 99942400 unmapped: 3661824 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:01.807159+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 2703360 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:02.807393+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fab0b000/0x0/0x4ffc00000, data 0x12a173d/0x1381000, compress 0x0/0x0/0x0, omap 0x20e4b, meta 0x3d4f1b5), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 3219456 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:03.807533+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 3219456 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:04.807663+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208529 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100442112 unmapped: 3162112 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:05.807831+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:06.808107+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 2998272 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:07.808281+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 2998272 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaec000/0x0/0x4ffc00000, data 0x12bcdd0/0x139e000, compress 0x0/0x0/0x0, omap 0x211f6, meta 0x3d4ee0a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:08.808477+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 2998272 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.933694839s of 10.000862122s, submitted: 45
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:09.808682+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100769792 unmapped: 2834432 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215243 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:10.809504+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100769792 unmapped: 2834432 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:11.809660+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 100769792 unmapped: 2834432 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:12.809837+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 2383872 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0x13044d6/0x13e6000, compress 0x0/0x0/0x0, omap 0x215ab, meta 0x3d4ea55), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:13.810020+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 2383872 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:14.810117+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101318656 unmapped: 2285568 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223771 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:15.810223+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101482496 unmapped: 2121728 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:16.810339+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101482496 unmapped: 2121728 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa79000/0x0/0x4ffc00000, data 0x1330724/0x1413000, compress 0x0/0x0/0x0, omap 0x21c4c, meta 0x3d4e3b4), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:17.810473+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 1998848 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:18.810606+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 1998848 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903629303s of 10.000597000s, submitted: 48
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:19.810733+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 1007616 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222635 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:20.810953+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102629376 unmapped: 974848 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:21.811159+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 1261568 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:22.811353+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102432768 unmapped: 1171456 heap: 103604224 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa27000/0x0/0x4ffc00000, data 0x138224c/0x1465000, compress 0x0/0x0/0x0, omap 0x223d0, meta 0x3d4dc30), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa27000/0x0/0x4ffc00000, data 0x138224c/0x1465000, compress 0x0/0x0/0x0, omap 0x223d0, meta 0x3d4dc30), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:23.811455+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 2056192 heap: 104652800 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4faa0c000/0x0/0x4ffc00000, data 0x139aead/0x147f000, compress 0x0/0x0/0x0, omap 0x224ae, meta 0x3d4db52), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:24.811562+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 1769472 heap: 104652800 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232147 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:25.811677+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 1753088 heap: 104652800 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03e5a3000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:26.811814+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 103096320 unmapped: 1556480 heap: 104652800 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9ad000/0x0/0x4ffc00000, data 0x13f8ecd/0x14df000, compress 0x0/0x0/0x0, omap 0x22998, meta 0x3d4d668), peers [0,2] op hist [2])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 12
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa9ad000/0x0/0x4ffc00000, data 0x13f8ecd/0x14df000, compress 0x0/0x0/0x0, omap 0x22998, meta 0x3d4d668), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:27.811968+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 1998848 heap: 104652800 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0x1413374/0x14fc000, compress 0x0/0x0/0x0, omap 0x22ac0, meta 0x3d4d540), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:28.812095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 3047424 heap: 105701376 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 13
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.827425003s of 10.000213623s, submitted: 83
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:29.812225+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1990656 heap: 105701376 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241933 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:30.812428+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 104071168 unmapped: 1630208 heap: 105701376 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:31.812559+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 1359872 heap: 105701376 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa924000/0x0/0x4ffc00000, data 0x1480bf7/0x1568000, compress 0x0/0x0/0x0, omap 0x23088, meta 0x3d4cf78), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:32.812668+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 1359872 heap: 105701376 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:33.812775+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa925000/0x0/0x4ffc00000, data 0x1480b95/0x1567000, compress 0x0/0x0/0x0, omap 0x2311c, meta 0x3d4cee4), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 2588672 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:34.812889+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 105586688 unmapped: 1163264 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252691 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:35.813048+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 1089536 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:36.813166+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 573440 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:37.813453+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 557056 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:38.813546+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106119168 unmapped: 630784 heap: 106749952 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa880000/0x0/0x4ffc00000, data 0x15278d4/0x160c000, compress 0x0/0x0/0x0, omap 0x23dd4, meta 0x3d4c22c), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:39.813687+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 1597440 heap: 107798528 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.365393639s of 10.561934471s, submitted: 104
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248801 data_alloc: 218103808 data_used: 10175
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:40.813925+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 1507328 heap: 107798528 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:41.814095+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106340352 unmapped: 1458176 heap: 107798528 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa840000/0x0/0x4ffc00000, data 0x15640e6/0x164a000, compress 0x0/0x0/0x0, omap 0x23fda, meta 0x3d4c026), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:42.814263+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 105922560 unmapped: 1875968 heap: 107798528 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:43.814517+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 1867776 heap: 107798528 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:44.814676+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 1622016 heap: 108847104 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x158b721/0x1671000, compress 0x0/0x0/0x0, omap 0x24103, meta 0x3d4befd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260967 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:45.814844+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 2842624 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:46.815043+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106938368 unmapped: 2957312 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:47.815166+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106938368 unmapped: 2957312 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:48.815385+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 3383296 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7a4000/0x0/0x4ffc00000, data 0x16019dd/0x16e8000, compress 0x0/0x0/0x0, omap 0x24870, meta 0x3d4b790), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:49.815533+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 3383296 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903057098s of 10.125585556s, submitted: 108
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267319 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:50.815790+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 3383296 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa7a1000/0x0/0x4ffc00000, data 0x16047df/0x16eb000, compress 0x0/0x0/0x0, omap 0x24902, meta 0x3d4b6fe), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:51.815914+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 3194880 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa79c000/0x0/0x4ffc00000, data 0x160625e/0x16ee000, compress 0x0/0x0/0x0, omap 0x24986, meta 0x3d4b67a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:52.816110+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 3178496 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x1625bb9/0x170e000, compress 0x0/0x0/0x0, omap 0x24986, meta 0x3d4b67a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:53.816242+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 3350528 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:54.816356+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 2072576 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273451 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:55.816480+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa719000/0x0/0x4ffc00000, data 0x168856c/0x1771000, compress 0x0/0x0/0x0, omap 0x24ca9, meta 0x3d4b357), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 2064384 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:56.816641+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 2064384 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:57.816842+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 1941504 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:58.817032+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 1941504 heap: 109895680 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:59.817142+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108126208 unmapped: 2818048 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.864830017s of 10.002592087s, submitted: 100
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278581 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:00.817314+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 3719168 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0x1700cae/0x17ea000, compress 0x0/0x0/0x0, omap 0x25413, meta 0x3d4abed), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:01.817583+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 3710976 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:02.817902+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0x1700d13/0x17ea000, compress 0x0/0x0/0x0, omap 0x254a5, meta 0x3d4ab5b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 3710976 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:03.818148+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 3342336 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:04.818355+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 2367488 heap: 110944256 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287847 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:05.818607+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 3293184 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:06.818852+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 3072000 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:07.819043+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 3055616 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa604000/0x0/0x4ffc00000, data 0x179ff45/0x1887000, compress 0x0/0x0/0x0, omap 0x25d33, meta 0x3d4a2cd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:08.819178+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 3055616 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa604000/0x0/0x4ffc00000, data 0x179ff45/0x1887000, compress 0x0/0x0/0x0, omap 0x25d33, meta 0x3d4a2cd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:09.819425+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 2875392 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa604000/0x0/0x4ffc00000, data 0x179ffaa/0x1887000, compress 0x0/0x0/0x0, omap 0x25d33, meta 0x3d4a2cd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.830165863s of 10.003282547s, submitted: 95
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287467 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:10.819563+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109125632 unmapped: 2867200 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:11.819698+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109125632 unmapped: 2867200 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:12.819848+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3276800 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:13.820251+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3276800 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:14.820364+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 3268608 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286895 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:15.820477+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 3268608 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa5b8000/0x0/0x4ffc00000, data 0x17efb65/0x18d4000, compress 0x0/0x0/0x0, omap 0x2629e, meta 0x3d49d62), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:16.820638+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 3194880 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:17.820760+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 3194880 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:18.820907+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3186688 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa5a3000/0x0/0x4ffc00000, data 0x180608c/0x18e9000, compress 0x0/0x0/0x0, omap 0x2669c, meta 0x3d49964), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:19.821034+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 2129920 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287531 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:20.821181+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 2129920 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa596000/0x0/0x4ffc00000, data 0x181380b/0x18f6000, compress 0x0/0x0/0x0, omap 0x26777, meta 0x3d49889), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:21.821362+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109862912 unmapped: 2129920 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.925903320s of 12.002732277s, submitted: 43
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:22.821464+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 2048000 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x1823281/0x1907000, compress 0x0/0x0/0x0, omap 0x26809, meta 0x3d497f7), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:23.821666+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110018560 unmapped: 1974272 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:24.823748+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa565000/0x0/0x4ffc00000, data 0x1840b3b/0x1926000, compress 0x0/0x0/0x0, omap 0x26a08, meta 0x3d495f8), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110051328 unmapped: 1941504 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293543 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:25.823871+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110051328 unmapped: 1941504 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:26.824055+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 1916928 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:27.824197+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread fragmentation_score=0.000033 took=0.000034s
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 1916928 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:28.824323+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 1908736 heap: 111992832 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:29.824426+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 2785280 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa52b000/0x0/0x4ffc00000, data 0x187bd23/0x1961000, compress 0x0/0x0/0x0, omap 0x26c50, meta 0x3d493b0), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298271 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:30.824650+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 2613248 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa518000/0x0/0x4ffc00000, data 0x188e623/0x1974000, compress 0x0/0x0/0x0, omap 0x26d2b, meta 0x3d492d5), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:31.824881+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 2605056 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:32.825021+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.227259636s of 10.338641167s, submitted: 70
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2531328 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:33.825200+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 2555904 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4b5000/0x0/0x4ffc00000, data 0x18efcc3/0x19d6000, compress 0x0/0x0/0x0, omap 0x27097, meta 0x3d48f69), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:34.825288+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 2506752 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308265 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:35.825495+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 2473984 heap: 113041408 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa4a8000/0x0/0x4ffc00000, data 0x18fc512/0x19e3000, compress 0x0/0x0/0x0, omap 0x27204, meta 0x3d48dfc), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:36.825735+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fa494000/0x0/0x4ffc00000, data 0x1912562/0x19f8000, compress 0x0/0x0/0x0, omap 0x273ba, meta 0x3d48c46), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 3194880 heap: 114089984 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:37.826017+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 111124480 unmapped: 2965504 heap: 114089984 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:38.826169+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 111132672 unmapped: 2957312 heap: 114089984 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:39.826286+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 148 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 1499136 heap: 114089984 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322437 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:40.826505+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 1736704 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:41.826675+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 1671168 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:42.826857+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19ccf32/0x1ab6000, compress 0x0/0x0/0x0, omap 0x27971, meta 0x4ee868f), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 1376256 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.400063515s of 10.665641785s, submitted: 137
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:43.827122+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 1376256 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:44.827351+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 1368064 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316181 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:45.827524+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 1368064 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:46.827726+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19ccffc/0x1ab6000, compress 0x0/0x0/0x0, omap 0x27edc, meta 0x4ee8124), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 1359872 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:47.827895+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19ccffc/0x1ab6000, compress 0x0/0x0/0x0, omap 0x27edc, meta 0x4ee8124), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 1359872 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 149 ms_handle_reset con 0x55a03e5a3000 session 0x55a03e6e16c0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:48.828037+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 1081344 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 14
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:49.828168+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 1048576 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317619 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:50.828602+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 1048576 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:51.828824+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 1040384 heap: 115138560 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:52.829056+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19ceb42/0x1ab9000, compress 0x0/0x0/0x0, omap 0x2835a, meta 0x4ee7ca6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870506287s of 10.002090454s, submitted: 245
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:53.829250+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:54.829386+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321015 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:55.829581+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:56.829701+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:57.829868+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9233000/0x0/0x4ffc00000, data 0x19cec11/0x1ab9000, compress 0x0/0x0/0x0, omap 0x28ac4, meta 0x4ee753c), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 2088960 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:58.830055+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 2097152 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19cecd2/0x1aba000, compress 0x0/0x0/0x0, omap 0x28cc3, meta 0x4ee733d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03d067c00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:59.830217+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 1933312 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19cecd2/0x1aba000, compress 0x0/0x0/0x0, omap 0x28cc3, meta 0x4ee733d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323393 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:00.830386+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 1916928 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 15
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:01.830563+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:02.830712+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911300659s of 10.002430916s, submitted: 44
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:03.830883+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:04.831025+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cef7d/0x1ab8000, compress 0x0/0x0/0x0, omap 0x29675, meta 0x4ee698b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:05.831167+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323315 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:06.831409+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:07.831541+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:08.831722+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cef4b/0x1ab8000, compress 0x0/0x0/0x0, omap 0x29a73, meta 0x4ee658d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:09.831896+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cf0d7/0x1ab8000, compress 0x0/0x0/0x0, omap 0x29b4e, meta 0x4ee64b2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:10.832101+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324273 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cf0d7/0x1ab8000, compress 0x0/0x0/0x0, omap 0x29b4e, meta 0x4ee64b2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:11.832257+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:12.832412+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.929247856s of 10.003057480s, submitted: 31
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:13.832546+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:14.832745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:15.832911+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324273 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 1908736 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:16.833117+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9235000/0x0/0x4ffc00000, data 0x19cf044/0x1ab7000, compress 0x0/0x0/0x0, omap 0x2a226, meta 0x4ee5dda), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:17.833328+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 1900544 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:18.833520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 1892352 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cf13c/0x1ab7000, compress 0x0/0x0/0x0, omap 0x2a393, meta 0x4ee5c6d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:19.833692+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 1892352 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:20.833881+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323683 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 1892352 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:21.834058+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 1892352 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9234000/0x0/0x4ffc00000, data 0x19cf10e/0x1ab7000, compress 0x0/0x0/0x0, omap 0x2a7da, meta 0x4ee5826), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:22.834172+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 1892352 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.807897568s of 10.001960754s, submitted: 24
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:23.834370+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 1884160 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:24.834505+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 1826816 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:25.834664+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323683 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 1826816 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:26.834777+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 1826816 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:27.834904+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19cf373/0x1ab9000, compress 0x0/0x0/0x0, omap 0x2b0b1, meta 0x4ee4f4f), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 1826816 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:28.835110+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 1818624 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:29.835272+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 1818624 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:30.835462+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328803 data_alloc: 218103808 data_used: 10020
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 1818624 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9231000/0x0/0x4ffc00000, data 0x19d1108/0x1abb000, compress 0x0/0x0/0x0, omap 0x2b6fa, meta 0x4ee4906), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:31.835628+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 1785856 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:32.835769+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 1761280 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.895081520s of 10.002231598s, submitted: 60
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:33.835902+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 1761280 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:34.836057+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 1753088 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:35.836225+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328501 data_alloc: 218103808 data_used: 10175
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 1753088 heap: 116187136 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:36.836396+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 679936 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f8089000/0x0/0x4ffc00000, data 0x19d48b2/0x1abf000, compress 0x0/0x0/0x0, omap 0x2bd4f, meta 0x60842b1), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:37.836566+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 638976 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f8088000/0x0/0x4ffc00000, data 0x19d48e5/0x1abf000, compress 0x0/0x0/0x0, omap 0x2bc23, meta 0x60843dd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:38.836732+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:39.836911+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f808c000/0x0/0x4ffc00000, data 0x19d49af/0x1ac0000, compress 0x0/0x0/0x0, omap 0x2bd43, meta 0x60842bd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:40.837168+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333731 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f808b000/0x0/0x4ffc00000, data 0x19d4a46/0x1ac0000, compress 0x0/0x0/0x0, omap 0x2bd43, meta 0x60842bd), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:41.837366+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 598016 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:42.837498+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 598016 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887531281s of 10.002774239s, submitted: 70
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:43.837651+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 589824 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:44.837754+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 581632 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:45.837894+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339841 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f8085000/0x0/0x4ffc00000, data 0x19d693f/0x1ac5000, compress 0x0/0x0/0x0, omap 0x2c3c0, meta 0x6083c40), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 548864 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:46.838029+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 532480 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:47.838182+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f8081000/0x0/0x4ffc00000, data 0x19d8439/0x1ac7000, compress 0x0/0x0/0x0, omap 0x2c5d2, meta 0x6083a2e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 532480 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:48.838365+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 532480 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:49.838481+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 532480 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:50.838656+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341305 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:51.838816+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f8086000/0x0/0x4ffc00000, data 0x19d83a4/0x1ac6000, compress 0x0/0x0/0x0, omap 0x2c97a, meta 0x6083686), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:52.839060+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:53.839470+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.645964622s of 11.002327919s, submitted: 75
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:54.839659+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 630784 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03d067400
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:55.839817+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346223 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 475136 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:56.840030+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 16
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 434176 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:57.840187+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f807d000/0x0/0x4ffc00000, data 0x19da1a2/0x1acc000, compress 0x0/0x0/0x0, omap 0x2d306, meta 0x6082cfa), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 401408 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:58.840324+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 385024 heap: 117235712 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:59.840464+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 1351680 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:00.840690+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352263 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 1351680 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:01.840867+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8067000/0x0/0x4ffc00000, data 0x19f29b4/0x1ae4000, compress 0x0/0x0/0x0, omap 0x2d5d6, meta 0x6082a2a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 1507328 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:02.841150+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f803b000/0x0/0x4ffc00000, data 0x1a1e42a/0x1b10000, compress 0x0/0x0/0x0, omap 0x2d666, meta 0x608299a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 1507328 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:03.841282+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.110334396s of 10.002474785s, submitted: 64
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 1277952 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:04.841432+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 1269760 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:05.841616+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358349 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 1171456 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:06.841745+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7fe4000/0x0/0x4ffc00000, data 0x1a7675c/0x1b68000, compress 0x0/0x0/0x0, omap 0x2dae6, meta 0x608251a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 1130496 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:07.841955+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 1130496 heap: 118284288 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:08.842119+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118374400 unmapped: 2007040 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:09.842284+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 156 handle_osd_map epochs [156,157], i have 157, src has [1,157]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 2252800 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:10.842488+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370193 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 2064384 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:11.842661+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f7f3f000/0x0/0x4ffc00000, data 0x1b17aa8/0x1c0a000, compress 0x0/0x0/0x0, omap 0x2e15e, meta 0x6081ea2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 2064384 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:12.842837+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 1908736 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:13.842968+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.685726643s of 10.004467964s, submitted: 113
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 118390784 unmapped: 1990656 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:14.843129+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119504896 unmapped: 876544 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:15.843273+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377389 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 1171456 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:16.843399+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119316480 unmapped: 1064960 heap: 120381440 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:17.843554+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x1bd5fad/0x1cca000, compress 0x0/0x0/0x0, omap 0x2e8f6, meta 0x608170a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 1990656 heap: 121430016 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:18.843744+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119595008 unmapped: 1835008 heap: 121430016 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f7e5b000/0x0/0x4ffc00000, data 0x1bfb8fb/0x1cf0000, compress 0x0/0x0/0x0, omap 0x2eaee, meta 0x6081512), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:19.843895+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 1728512 heap: 121430016 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:20.844087+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384599 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 119824384 unmapped: 1605632 heap: 121430016 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:21.844220+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7de9000/0x0/0x4ffc00000, data 0x1c6ae24/0x1d60000, compress 0x0/0x0/0x0, omap 0x2f00d, meta 0x6080ff3), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 1286144 heap: 122478592 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:22.844397+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 1286144 heap: 122478592 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:23.844595+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7dcb000/0x0/0x4ffc00000, data 0x1c8a69d/0x1d7f000, compress 0x0/0x0/0x0, omap 0x2f24d, meta 0x6080db3), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.634226799s of 10.002802849s, submitted: 129
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 2310144 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7d9a000/0x0/0x4ffc00000, data 0x1cb9f56/0x1daf000, compress 0x0/0x0/0x0, omap 0x2f325, meta 0x6080cdb), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:24.844762+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 2777088 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7d87000/0x0/0x4ffc00000, data 0x1cd0500/0x1dc5000, compress 0x0/0x0/0x0, omap 0x2f445, meta 0x6080bbb), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:25.844937+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388485 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 2777088 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:26.845083+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 2777088 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:27.845193+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 2777088 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:28.845355+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 2777088 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:29.845500+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 2572288 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:30.845689+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7d1a000/0x0/0x4ffc00000, data 0x1d39fd8/0x1e30000, compress 0x0/0x0/0x0, omap 0x2f7a5, meta 0x608085b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398505 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 1236992 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:31.845855+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 1236992 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7ce7000/0x0/0x4ffc00000, data 0x1d6a8d3/0x1e62000, compress 0x0/0x0/0x0, omap 0x2f8a6, meta 0x608075a), peers [0,2] op hist [0,1])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:32.845991+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 1228800 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:33.846147+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.868089676s of 10.002170563s, submitted: 76
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121151488 unmapped: 2375680 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:34.846336+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121380864 unmapped: 2146304 heap: 123527168 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:35.846532+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405909 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 3047424 heap: 124575744 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:36.846696+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7c68000/0x0/0x4ffc00000, data 0x1deca74/0x1ee3000, compress 0x0/0x0/0x0, omap 0x2ff1e, meta 0x60800e2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121831424 unmapped: 2744320 heap: 124575744 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:37.846804+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121987072 unmapped: 2588672 heap: 124575744 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:38.846961+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 2580480 heap: 124575744 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:39.847119+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1073152 heap: 124575744 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:40.847370+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7bda000/0x0/0x4ffc00000, data 0x1e7c573/0x1f72000, compress 0x0/0x0/0x0, omap 0x301a6, meta 0x607fe5a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411141 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 2015232 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:41.847492+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 2015232 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:42.847614+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 1548288 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:43.847703+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 1540096 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.098937988s of 10.271041870s, submitted: 80
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:44.847807+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 1351680 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:45.847940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417113 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7b47000/0x0/0x4ffc00000, data 0x1f0d26f/0x2004000, compress 0x0/0x0/0x0, omap 0x3081e, meta 0x607f7e2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 1196032 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7b32000/0x0/0x4ffc00000, data 0x1f233fe/0x201a000, compress 0x0/0x0/0x0, omap 0x308f6, meta 0x607f70a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:46.848126+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 1187840 heap: 125624320 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:47.848432+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7b11000/0x0/0x4ffc00000, data 0x1f43a45/0x203b000, compress 0x0/0x0/0x0, omap 0x30c56, meta 0x607f3aa), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 1089536 heap: 126672896 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:48.848584+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 2695168 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:49.848714+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 2613248 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:50.848917+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428141 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7a84000/0x0/0x4ffc00000, data 0x1fcfdf3/0x20c7000, compress 0x0/0x0/0x0, omap 0x31046, meta 0x607efba), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 2613248 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:51.849161+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7a84000/0x0/0x4ffc00000, data 0x1fcfdf3/0x20c7000, compress 0x0/0x0/0x0, omap 0x31046, meta 0x607efba), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 2539520 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:52.849395+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 2539520 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:53.849608+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125485056 unmapped: 2236416 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819445610s of 10.002679825s, submitted: 81
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:54.849836+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 2039808 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:55.849989+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435145 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 2039808 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:56.850146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125943808 unmapped: 1777664 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f79d9000/0x0/0x4ffc00000, data 0x2079032/0x2171000, compress 0x0/0x0/0x0, omap 0x318b6, meta 0x607e74a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:57.850336+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f79d9000/0x0/0x4ffc00000, data 0x2079032/0x2171000, compress 0x0/0x0/0x0, omap 0x318b6, meta 0x607e74a), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126156800 unmapped: 1564672 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:58.850563+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126189568 unmapped: 1531904 heap: 127721472 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:59.851257+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126287872 unmapped: 2482176 heap: 128770048 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:00.851598+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7946000/0x0/0x4ffc00000, data 0x210d6fc/0x2205000, compress 0x0/0x0/0x0, omap 0x31d36, meta 0x607e2ca), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435687 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 125878272 unmapped: 2891776 heap: 128770048 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:01.851851+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 2883584 heap: 129818624 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:02.852233+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126951424 unmapped: 2867200 heap: 129818624 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:03.852682+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f78d2000/0x0/0x4ffc00000, data 0x2182028/0x2279000, compress 0x0/0x0/0x0, omap 0x3204e, meta 0x607dfb2), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127131648 unmapped: 2686976 heap: 129818624 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:04.853079+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.507986069s of 10.700219154s, submitted: 99
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127139840 unmapped: 2678784 heap: 129818624 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:05.853395+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443001 data_alloc: 218103808 data_used: 10976
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 3719168 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:06.854395+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 4595712 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:07.854995+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 4595712 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:08.855515+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 126320640 unmapped: 4546560 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:09.855787+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f788a000/0x0/0x4ffc00000, data 0x21ccb34/0x22c2000, compress 0x0/0x0/0x0, omap 0x3267e, meta 0x607d982), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127836160 unmapped: 3031040 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:10.856255+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449725 data_alloc: 218103808 data_used: 10980
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:11.856790+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127836160 unmapped: 3031040 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:12.856965+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 127836160 unmapped: 3031040 heap: 130867200 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:13.857369+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 128000000 unmapped: 3915776 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f77e8000/0x0/0x4ffc00000, data 0x226e4eb/0x2363000, compress 0x0/0x0/0x0, omap 0x329de, meta 0x607d622), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:14.857524+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3670016 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.805747986s of 10.009436607s, submitted: 149
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:15.857769+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 128319488 unmapped: 3596288 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455667 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:16.858121+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 3571712 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:17.858390+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 3571712 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 161 ms_handle_reset con 0x55a03d067400 session 0x55a03ef9e540
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 161 ms_handle_reset con 0x55a03d067c00 session 0x55a03dda1880
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:18.858651+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 2490368 heap: 131915776 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 17
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:19.858908+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 2048000 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f7772000/0x0/0x4ffc00000, data 0x22e3601/0x23da000, compress 0x0/0x0/0x0, omap 0x32eee, meta 0x607d112), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:20.859192+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 2048000 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464625 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:21.859402+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 1859584 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:22.859591+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 131489792 unmapped: 1474560 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:23.859783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 2629632 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f770b000/0x0/0x4ffc00000, data 0x2348373/0x2441000, compress 0x0/0x0/0x0, omap 0x336bb, meta 0x607c945), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:24.859965+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130334720 unmapped: 2629632 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.560550690s of 10.006949425s, submitted: 277
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:25.860793+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 2449408 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468827 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:26.860941+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130523136 unmapped: 2441216 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f76a0000/0x0/0x4ffc00000, data 0x23b1a94/0x24ab000, compress 0x0/0x0/0x0, omap 0x33ceb, meta 0x607c315), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:27.861119+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 2228224 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:28.861253+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 2097152 heap: 132964352 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:29.861392+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 1900544 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:30.861537+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 1900544 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f7607000/0x0/0x4ffc00000, data 0x244b3b6/0x2544000, compress 0x0/0x0/0x0, omap 0x340db, meta 0x607bf25), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474187 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:31.861696+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 2277376 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:32.861824+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 2244608 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:33.861985+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 2170880 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f75b3000/0x0/0x4ffc00000, data 0x249f3eb/0x2598000, compress 0x0/0x0/0x0, omap 0x3428b, meta 0x607bd75), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:34.862127+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132038656 unmapped: 1974272 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.867843628s of 10.007029533s, submitted: 72
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:35.862267+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 1875968 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477785 data_alloc: 218103808 data_used: 10825
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:36.862379+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 1875968 heap: 134012928 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:37.862543+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 1613824 heap: 135061504 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:38.862676+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 1613824 heap: 135061504 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:39.862837+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f7548000/0x0/0x4ffc00000, data 0x250e40a/0x2604000, compress 0x0/0x0/0x0, omap 0x34c1b, meta 0x607b3e5), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 2105344 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:40.863026+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 1859584 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1485859 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:41.863209+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 2359296 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f7509000/0x0/0x4ffc00000, data 0x254a433/0x2641000, compress 0x0/0x0/0x0, omap 0x34e5b, meta 0x607b1a5), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:42.863368+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 2293760 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:43.863546+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133824512 unmapped: 2285568 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:44.863685+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134094848 unmapped: 2015232 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.832171440s of 10.007154465s, submitted: 75
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:45.863847+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134094848 unmapped: 2015232 heap: 136110080 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493361 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:46.864003+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 4022272 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:47.864180+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 4022272 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f7492000/0x0/0x4ffc00000, data 0x25bd151/0x26b8000, compress 0x0/0x0/0x0, omap 0x3556b, meta 0x607aa95), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:48.864369+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 4014080 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:49.864508+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:50.864657+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497209 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:51.864805+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:52.864915+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:53.865036+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f746a000/0x0/0x4ffc00000, data 0x25e9328/0x26e2000, compress 0x0/0x0/0x0, omap 0x35ac3, meta 0x607a53d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:54.865174+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.928264618s of 10.006814003s, submitted: 49
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:55.865331+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133029888 unmapped: 4128768 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501815 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:56.865493+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133054464 unmapped: 4104192 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:57.865652+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133054464 unmapped: 4104192 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f7461000/0x0/0x4ffc00000, data 0x25ecb61/0x26e7000, compress 0x0/0x0/0x0, omap 0x35f00, meta 0x607a100), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:58.865783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133054464 unmapped: 4104192 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f7461000/0x0/0x4ffc00000, data 0x25ecb61/0x26e7000, compress 0x0/0x0/0x0, omap 0x35f00, meta 0x607a100), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:59.865907+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 4235264 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:00.866077+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 4235264 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505673 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:01.866226+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f745f000/0x0/0x4ffc00000, data 0x25ee761/0x26eb000, compress 0x0/0x0/0x0, omap 0x36275, meta 0x6079d8b), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:02.866391+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:03.866581+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:04.866810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.888524055s of 10.012919426s, submitted: 89
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:05.867009+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505673 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:06.867342+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:07.867572+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f745f000/0x0/0x4ffc00000, data 0x25ee761/0x26eb000, compress 0x0/0x0/0x0, omap 0x363dd, meta 0x6079c23), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:08.867751+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7461000/0x0/0x4ffc00000, data 0x25ee761/0x26eb000, compress 0x0/0x0/0x0, omap 0x3658d, meta 0x6079a73), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:09.867940+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 4218880 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:10.868199+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132956160 unmapped: 4202496 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507857 data_alloc: 218103808 data_used: 10670
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:11.868520+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132956160 unmapped: 4202496 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:12.868783+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132956160 unmapped: 4202496 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f745d000/0x0/0x4ffc00000, data 0x25f0395/0x26ed000, compress 0x0/0x0/0x0, omap 0x3661d, meta 0x60799e3), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:13.868919+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 4194304 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:14.869056+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.903256416s of 10.002158165s, submitted: 71
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:15.869218+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511173 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:16.869391+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:17.869578+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:18.869821+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f745b000/0x0/0x4ffc00000, data 0x25f2166/0x26f1000, compress 0x0/0x0/0x0, omap 0x36bd2, meta 0x607942e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:19.869945+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132972544 unmapped: 4186112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 169 handle_osd_map epochs [169,170], i have 170, src has [1,170]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:20.870162+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 4177920 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513279 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:21.870340+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 4177920 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:22.870495+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f7454000/0x0/0x4ffc00000, data 0x25f57ab/0x26f6000, compress 0x0/0x0/0x0, omap 0x36e17, meta 0x60791e9), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:23.870623+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:24.870776+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:25.870930+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516053 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:26.871080+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 171 heartbeat osd_stat(store_statfs(0x4f7454000/0x0/0x4ffc00000, data 0x25f57ab/0x26f6000, compress 0x0/0x0/0x0, omap 0x36e17, meta 0x60791e9), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:27.871221+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:28.871391+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.005483627s of 14.247331619s, submitted: 48
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:29.871537+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 4169728 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:30.871725+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 4161536 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:31.871886+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133005312 unmapped: 4153344 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:32.872011+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:33.872170+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:34.872393+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 54K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 4598 syncs, 3.14 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7289 writes, 24K keys, 7289 commit groups, 1.0 writes per commit group, ingest: 34.81 MB, 0.06 MB/s
                                           Interval WAL: 7289 writes, 3168 syncs, 2.30 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:35.872608+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:36.872810+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:37.872963+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:38.873111+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 4145152 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 ms_handle_reset con 0x55a03bad9c00 session 0x55a03a970000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03cfeb000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:39.873242+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133021696 unmapped: 4136960 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc ms_handle_reset ms_handle_reset con 0x55a03d19dc00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: get_auth_request con 0x55a03e5ff400 auth_method 0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 ms_handle_reset con 0x55a03d146000 session 0x55a03d0cf880
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03cfea800
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 ms_handle_reset con 0x55a03c609800 session 0x55a03d16ddc0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03d146000
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:40.873461+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:41.873593+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:42.873690+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:43.873840+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:44.874046+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:45.874216+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:46.874382+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:47.874505+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 4472832 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:48.874623+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:49.874765+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:50.874892+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:51.874967+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:52.875049+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:53.875146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:54.875272+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:55.875396+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:56.875545+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:57.875683+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:58.875846+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:59.875982+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:00.876185+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:01.876359+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:02.876708+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:03.876850+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 4464640 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:04.876995+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:05.877152+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:06.877265+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518827 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:07.877379+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:08.877539+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7451000/0x0/0x4ffc00000, data 0x25f724a/0x26f9000, compress 0x0/0x0/0x0, omap 0x36dd7, meta 0x6079229), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.880928040s of 39.893814087s, submitted: 15
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:09.877668+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:10.877869+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:11.878023+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519943 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:12.878165+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:13.878353+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7452000/0x0/0x4ffc00000, data 0x25f72e5/0x26fa000, compress 0x0/0x0/0x0, omap 0x36f3f, meta 0x60790c1), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:14.878473+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:15.878616+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:16.878798+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519655 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:17.878972+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 4456448 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:18.879146+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 ms_handle_reset con 0x55a03cfeb800 session 0x55a03d32ae00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: handle_auth_request added challenge on 0x55a03d066800
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132710400 unmapped: 4448256 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:19.879343+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 heartbeat osd_stat(store_statfs(0x4f7452000/0x0/0x4ffc00000, data 0x25f734a/0x26fa000, compress 0x0/0x0/0x0, omap 0x3717f, meta 0x6078e81), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 132710400 unmapped: 4448256 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.852937698s of 10.869199753s, submitted: 8
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 172 handle_osd_map epochs [172,173], i have 173, src has [1,173]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:20.880626+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 3391488 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 18
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:21.880822+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524267 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133865472 unmapped: 3293184 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:22.880996+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133865472 unmapped: 3293184 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 19
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:23.881221+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 3219456 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:24.881419+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f7450000/0x0/0x4ffc00000, data 0x25f8fe3/0x26fc000, compress 0x0/0x0/0x0, omap 0x375b7, meta 0x6078a49), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 3219456 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:25.881647+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 3219456 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:26.881817+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1523101 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 3178496 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 173 heartbeat osd_stat(store_statfs(0x4f7450000/0x0/0x4ffc00000, data 0x25f8fe3/0x26fc000, compress 0x0/0x0/0x0, omap 0x375b7, meta 0x6078a49), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:27.882038+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 3162112 heap: 137158656 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:28.882276+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134037504 unmapped: 4169728 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:29.882474+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134094848 unmapped: 4112384 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:30.882692+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447680473s of 10.781259537s, submitted: 147
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f7450000/0x0/0x4ffc00000, data 0x25f8fe3/0x26fc000, compress 0x0/0x0/0x0, omap 0x375b7, meta 0x6078a49), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:31.882924+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526291 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:32.883086+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x25faa62/0x26ff000, compress 0x0/0x0/0x0, omap 0x3790a, meta 0x60786f6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x25faa62/0x26ff000, compress 0x0/0x0/0x0, omap 0x3790a, meta 0x60786f6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:33.883265+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:34.883436+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:35.883705+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:36.883838+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526291 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:37.884018+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f744b000/0x0/0x4ffc00000, data 0x25faa62/0x26ff000, compress 0x0/0x0/0x0, omap 0x37a72, meta 0x607858e), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:38.884177+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:39.884384+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:40.884593+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 rsyslogd[1001]: imjournal from <np0005604375:ceph-osd>: begin to drop messages due to rate-limiting
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:41.884751+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1525715 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f744d000/0x0/0x4ffc00000, data 0x25faa62/0x26ff000, compress 0x0/0x0/0x0, omap 0x37bda, meta 0x6078426), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:42.884962+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.376166344s of 12.391843796s, submitted: 18
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:43.885104+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 4096000 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:44.885381+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134119424 unmapped: 4087808 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:45.885534+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 heartbeat osd_stat(store_statfs(0x4f744c000/0x0/0x4ffc00000, data 0x25fabc7/0x2700000, compress 0x0/0x0/0x0, omap 0x37e1a, meta 0x60781e6), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134119424 unmapped: 4087808 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:46.885696+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527423 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 134119424 unmapped: 4087808 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:47.886251+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 3039232 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:48.886420+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 3039232 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:49.886584+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 174 handle_osd_map epochs [174,175], i have 175, src has [1,175]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 175 heartbeat osd_stat(store_statfs(0x4f744d000/0x0/0x4ffc00000, data 0x25fabf6/0x26ff000, compress 0x0/0x0/0x0, omap 0x3817a, meta 0x6077e86), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 3031040 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:50.886751+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 175 heartbeat osd_stat(store_statfs(0x4f7448000/0x0/0x4ffc00000, data 0x25fc7fb/0x2702000, compress 0x0/0x0/0x0, omap 0x38252, meta 0x6077dae), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 3031040 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:51.886934+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530167 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 3031040 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:52.887062+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 3031040 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:53.887227+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 3031040 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.001616478s of 11.065266609s, submitted: 38
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:54.887395+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 3022848 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:55.887552+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 3022848 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _renew_subs
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:56.887664+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 3014656 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:57.887841+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 3014656 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:58.888005+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 3014656 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:59.888184+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 3014656 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:00.888394+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:01.888522+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:02.888778+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:03.888935+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:04.889061+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:05.889131+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:06.889291+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:07.889507+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:08.889701+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:09.889868+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:10.890031+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:11.890205+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:12.890460+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:13.890619+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:14.890808+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:15.890977+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 3006464 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:16.891140+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:17.891281+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:18.891510+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:19.891672+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:20.891868+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:21.892035+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:22.892183+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:23.892371+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:24.892529+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:25.892685+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:26.892842+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:27.893018+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:28.893219+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:29.893397+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 2998272 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:30.893530+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:31.893704+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:32.893867+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:33.894030+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:34.894189+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:35.894389+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:36.894527+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:37.894680+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:38.894830+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:39.895003+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:40.895205+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:41.895384+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:42.895542+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:43.895687+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 2990080 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:44.895820+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 2981888 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:45.895932+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 2981888 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:46.896087+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532797 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 2981888 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:47.896280+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 2981888 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:48.896612+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7445000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.168884277s of 54.184669495s, submitted: 14
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 ms_handle_reset con 0x55a03c61c000 session 0x55a03e63ce00
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x38383, meta 0x6077c7d), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 2539520 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:49.896791+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 2539520 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:50.896978+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 2539520 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Got map version 20
Feb 01 15:23:55 compute-0 ceph-osd[87011]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:51.897154+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:52.897325+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:53.897536+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:54.897772+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:55.897990+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:56.898141+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:57.898331+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:58.898477+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:59.898662+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:00.898853+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:01.899096+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:02.899238+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:03.899422+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:04.899542+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:05.899741+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:06.899934+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:07.900097+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:08.900336+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:09.900467+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:10.900659+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:11.900766+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:12.900873+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:13.900991+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:14.901130+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:15.901230+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:16.901380+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:17.902167+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:18.902375+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:19.902503+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:20.904301+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:21.905122+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135700480 unmapped: 2506752 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:55 compute-0 ceph-osd[87011]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:55 compute-0 ceph-osd[87011]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532221 data_alloc: 218103808 data_used: 11304
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:22.905279+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135782400 unmapped: 2424832 heap: 138207232 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'config diff' '{prefix=config diff}'
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'config show' '{prefix=config show}'
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'counter dump' '{prefix=counter dump}'
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'counter schema' '{prefix=counter schema}'
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:23.905357+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135938048 unmapped: 3317760 heap: 139255808 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: tick
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_tickets
Feb 01 15:23:55 compute-0 ceph-osd[87011]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:24.905473+0000)
Feb 01 15:23:55 compute-0 ceph-osd[87011]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 3325952 heap: 139255808 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:55 compute-0 ceph-osd[87011]: osd.1 176 heartbeat osd_stat(store_statfs(0x4f7447000/0x0/0x4ffc00000, data 0x25fe27a/0x2705000, compress 0x0/0x0/0x0, omap 0x37e01, meta 0x60781ff), peers [0,2] op hist [])
Feb 01 15:23:55 compute-0 ceph-osd[87011]: do_command 'log dump' '{prefix=log dump}'
Feb 01 15:23:55 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 01 15:23:55 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533297859' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 01 15:23:55 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 01 15:23:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 01 15:23:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670990106' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 01 15:23:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2811723650' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 01 15:23:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/533297859' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 01 15:23:56 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1670990106' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 01 15:23:56 compute-0 nova_compute[238794]: 2026-02-01 15:23:56.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:23:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 01 15:23:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381964464' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 01 15:23:56 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 01 15:23:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358026576' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 01 15:23:56 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 01 15:23:56 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155677763' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 01 15:23:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094889236' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1381964464' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: pgmap v1176: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/358026576' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/155677763' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3094889236' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 01 15:23:57 compute-0 nova_compute[238794]: 2026-02-01 15:23:57.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 01 15:23:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972658090' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 01 15:23:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674726184' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 01 15:23:57 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 01 15:23:57 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324027002' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 01 15:23:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130637122' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/972658090' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2674726184' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1324027002' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4130637122' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 01 15:23:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430327700' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 01 15:23:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340457074' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 15:23:58 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:58 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:58 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 01 15:23:58 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4014751040' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 01 15:23:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1430327700' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2340457074' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: pgmap v1177: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:23:59 compute-0 ceph-mon[75179]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4014751040' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:59 compute-0 nova_compute[238794]: 2026-02-01 15:23:59.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:23:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:23:59 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb 01 15:23:59 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 103 pg[9.19( v 57'487 (0'0,57'487] lb MIN local-lis/les=100/101 n=6 ec=48/32 lis/c=100/55 les/c/f=101/56/0 sis=102) [2] r=-1 lpr=102 pi=[55,102)/1 crt=57'487 lcod 57'486 unknown NOTIFY mbc={}] exit Started 1.249613 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:26.423770+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 735334 data_alloc: 218103808 data_used: 4799
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:27.423946+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:53:57.061016+0000 osd.0 (osd.0) 58 : cluster [DBG] 11.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:53:57.071536+0000 osd.0 (osd.0) 59 : cluster [DBG] 11.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 59)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:53:57.061016+0000 osd.0 (osd.0) 58 : cluster [DBG] 11.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:53:57.071536+0000 osd.0 (osd.0) 59 : cluster [DBG] 11.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fceef000/0x0/0x4ffc00000, data 0x9b6ff/0x13d000, compress 0x0/0x0/0x0, omap 0x1237f, meta 0x2bbdc81), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:28.424204+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:53:58.011943+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:53:58.022498+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 61)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:53:58.011943+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:53:58.022498+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 811008 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:29.424435+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 942080 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:30.424574+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:00.068753+0000 osd.0 (osd.0) 62 : cluster [DBG] 3.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:00.079323+0000 osd.0 (osd.0) 63 : cluster [DBG] 3.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 63)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:00.068753+0000 osd.0 (osd.0) 62 : cluster [DBG] 3.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:00.079323+0000 osd.0 (osd.0) 63 : cluster [DBG] 3.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 933888 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:31.424762+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 933888 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741385 data_alloc: 218103808 data_used: 4799
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:32.425826+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:02.137148+0000 osd.0 (osd.0) 64 : cluster [DBG] 11.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:02.147788+0000 osd.0 (osd.0) 65 : cluster [DBG] 11.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fceef000/0x0/0x4ffc00000, data 0x9b6ff/0x13d000, compress 0x0/0x0/0x0, omap 0x1237f, meta 0x2bbdc81), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 65)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:02.137148+0000 osd.0 (osd.0) 64 : cluster [DBG] 11.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:02.147788+0000 osd.0 (osd.0) 65 : cluster [DBG] 11.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:33.425991+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 909312 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c(unlocked)] enter Initial
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=0 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=0 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000026
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000038
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000249 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:34.426165+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 105 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.791613 2 0.000118
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.791929 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.791963 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=105) [0] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000220 1 0.000308
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000052 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 884736 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fcee2000/0x0/0x4ffc00000, data 0xa08b8/0x146000, compress 0x0/0x0/0x0, omap 0x12b08, meta 0x2bbd4f8), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:35.426329+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 1925120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.017609 6 0.000147
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 0'0 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=79/79 les/c/f=80/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=57'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 38'124 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004590 3 0.000271
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 38'124 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 38'124 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000171 1 0.000041
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 lc 38'124 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.956189156s of 10.003722191s, submitted: 27
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.064199 1 0.000069
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 107 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:36.426480+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770971 data_alloc: 218103808 data_used: 4799
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.963932 1 0.000055
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started/ReplicaActive 1.033034 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] exit Started 2.050779 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[79,106)/1 pct=0'0 crt=57'487 active+remapped mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 pct=0'0 crt=57'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000453 1 0.000524
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] exit Start 0.000095 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000229
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=0/0 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Feb 01 15:23:59 compute-0 ceph-osd[85969]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001318 3 0.000070
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 108 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:37.426802+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 108 handle_osd_map epochs [108,109], i have 109, src has [1,109]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995275 2 0.000116
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996772 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=106/107 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=106/79 les/c/f=107/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=108/79 les/c/f=109/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004548 3 0.000260
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=108/79 les/c/f=109/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=108/79 les/c/f=109/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 109 pg[9.1c( v 57'487 (0'0,57'487] local-lis/les=108/109 n=6 ec=48/32 lis/c=108/79 les/c/f=109/80/0 sis=108) [0] r=0 lpr=108 pi=[79,108)/1 crt=57'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa5975/0x152000, compress 0x0/0x0/0x0, omap 0x132a3, meta 0x2bbcd5d), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:38.426951+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:39.427100+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:40.427271+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:41.427462+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777703 data_alloc: 218103808 data_used: 4799
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 1974272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:42.427608+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 1966080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:43.427771+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa5975/0x152000, compress 0x0/0x0/0x0, omap 0x132a3, meta 0x2bbcd5d), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 1957888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e(unlocked)] enter Initial
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=0 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000135 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=0 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000059
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000584 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000173 1 0.000735
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000077 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000335 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:44.427918+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:14.140118+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:14.150620+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 1941504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 67)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:14.140118+0000 osd.0 (osd.0) 66 : cluster [DBG] 5.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:14.150620+0000 osd.0 (osd.0) 67 : cluster [DBG] 5.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.006075 2 0.000212
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.006483 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007127 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=111) [0] r=0 lpr=111 pi=[65,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000063 1 0.000098
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fced0000/0x0/0x4ffc00000, data 0xa90ad/0x158000, compress 0x0/0x0/0x0, omap 0x137bf, meta 0x2bbc841), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:45.428094+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 1941504 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fcecf000/0x0/0x4ffc00000, data 0xaab2e/0x15b000, compress 0x0/0x0/0x0, omap 0x13a50, meta 0x2bbc5b0), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.290892601s of 10.334468842s, submitted: 20
Feb 01 15:23:59 compute-0 ceph-osd[85969]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 0'0 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=57'485 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.268677 5 0.000063
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 0'0 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=57'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 0'0 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=65/65 les/c/f=66/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 crt=57'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 38'305 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004077 4 0.000103
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 38'305 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 38'305 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000046 1 0.000034
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 lc 38'305 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:46.428193+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.044885 1 0.000047
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 113 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801806 data_alloc: 218103808 data_used: 4799
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 1875968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.706673 1 0.000037
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] exit Started/ReplicaActive 0.755789 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] exit Started 2.024497 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[65,112)/1 pct=0'0 crt=57'485 active+remapped mbc={}] enter Reset
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 pct=0'0 crt=57'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] exit Reset 0.000107 1 0.000156
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] enter Started
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] enter Start
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] enter Started/Primary
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003132 2 0.000049
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=0/0 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Feb 01 15:23:59 compute-0 ceph-osd[85969]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001434 2 0.000066
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 114 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:47.428391+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 1867776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004427 2 0.000056
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009071 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=112/113 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=112/65 les/c/f=113/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=114/65 les/c/f=115/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001974 4 0.000125
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=114/65 les/c/f=115/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=114/65 les/c/f=115/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 pg_epoch: 115 pg[9.1e( v 57'485 (0'0,57'485] local-lis/les=114/115 n=6 ec=48/32 lis/c=114/65 les/c/f=115/66/0 sis=114) [0] r=0 lpr=114 pi=[65,114)/1 crt=57'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:48.428675+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:18.154707+0000 osd.0 (osd.0) 68 : cluster [DBG] 10.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:18.165219+0000 osd.0 (osd.0) 69 : cluster [DBG] 10.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 1867776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 69)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:18.154707+0000 osd.0 (osd.0) 68 : cluster [DBG] 10.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:18.165219+0000 osd.0 (osd.0) 69 : cluster [DBG] 10.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:49.428941+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 1859584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:50.429085+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 1851392 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:51.429227+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:21.124834+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:21.135443+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811700 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1802240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 71)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:21.124834+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:21.135443+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 115 heartbeat osd_stat(store_statfs(0x4fcec5000/0x0/0x4ffc00000, data 0xafae1/0x165000, compress 0x0/0x0/0x0, omap 0x1420f, meta 0x2bbbdf1), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:52.429451+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 1785856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:53.429613+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 1769472 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:54.429713+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1761280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:55.429901+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:25.059690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:25.070264+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1761280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 73)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:25.059690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:25.070264+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 118 handle_osd_map epochs [119,120], i have 118, src has [1,120]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.934921265s of 10.001466751s, submitted: 28
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:56.430251+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828879 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 1753088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:57.430379+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1720320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb4000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:58.430506+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1712128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:53:59.430626+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1712128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:00.430730+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:30.011048+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:30.021752+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1695744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 75)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:30.011048+0000 osd.0 (osd.0) 74 : cluster [DBG] 5.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:30.021752+0000 osd.0 (osd.0) 75 : cluster [DBG] 5.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:01.430929+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:31.017929+0000 osd.0 (osd.0) 76 : cluster [DBG] 2.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:31.028506+0000 osd.0 (osd.0) 77 : cluster [DBG] 2.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832085 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1695744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 77)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:31.017929+0000 osd.0 (osd.0) 76 : cluster [DBG] 2.2 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:31.028506+0000 osd.0 (osd.0) 77 : cluster [DBG] 2.2 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:02.431176+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1687552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:03.431405+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1687552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:04.431529+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:34.043718+0000 osd.0 (osd.0) 78 : cluster [DBG] 10.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:34.054343+0000 osd.0 (osd.0) 79 : cluster [DBG] 10.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1687552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 79)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:34.043718+0000 osd.0 (osd.0) 78 : cluster [DBG] 10.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:34.054343+0000 osd.0 (osd.0) 79 : cluster [DBG] 10.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:05.431711+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1679360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:06.431847+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:35.973753+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:35.984277+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836909 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1679360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 81)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:35.973753+0000 osd.0 (osd.0) 80 : cluster [DBG] 2.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:35.984277+0000 osd.0 (osd.0) 81 : cluster [DBG] 2.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:07.432088+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1671168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:08.432246+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1671168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:09.432400+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.547435760s of 13.569758415s, submitted: 10
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1646592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:10.432571+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:39.981174+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:39.991714+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 83)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:39.981174+0000 osd.0 (osd.0) 82 : cluster [DBG] 5.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:39.991714+0000 osd.0 (osd.0) 83 : cluster [DBG] 5.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1646592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:11.432815+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839320 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1646592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:12.432961+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1630208 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:13.433159+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1630208 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:14.433411+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:43.978846+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:43.988862+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 85)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:43.978846+0000 osd.0 (osd.0) 84 : cluster [DBG] 7.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:43.988862+0000 osd.0 (osd.0) 85 : cluster [DBG] 7.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1605632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:15.434449+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:45.019611+0000 osd.0 (osd.0) 86 : cluster [DBG] 11.14 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:45.030202+0000 osd.0 (osd.0) 87 : cluster [DBG] 11.14 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 87)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:45.019611+0000 osd.0 (osd.0) 86 : cluster [DBG] 11.14 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:45.030202+0000 osd.0 (osd.0) 87 : cluster [DBG] 11.14 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1589248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:16.435185+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844148 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1589248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:17.436158+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1572864 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:18.436422+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:48.065593+0000 osd.0 (osd.0) 88 : cluster [DBG] 3.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:48.076254+0000 osd.0 (osd.0) 89 : cluster [DBG] 3.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 89)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:48.065593+0000 osd.0 (osd.0) 88 : cluster [DBG] 3.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:48.076254+0000 osd.0 (osd.0) 89 : cluster [DBG] 3.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1572864 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:19.437403+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:49.053230+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.8 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:49.063780+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.8 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 91)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:49.053230+0000 osd.0 (osd.0) 90 : cluster [DBG] 10.8 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:49.063780+0000 osd.0 (osd.0) 91 : cluster [DBG] 10.8 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019369125s of 10.040273666s, submitted: 10
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1572864 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:20.437573+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:50.021466+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:54:50.032105+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 93)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:50.021466+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:54:50.032105+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1564672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:21.437760+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851383 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1564672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:22.438080+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1556480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:23.438241+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1556480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:24.438378+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1556480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:25.438657+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1548288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:26.438960+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851383 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1548288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:27.439239+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:28.439483+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1540096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:29.439672+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1531904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:30.440018+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:31.440276+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851383 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1523712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:32.440383+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.824834824s of 12.828289986s, submitted: 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:33.440561+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:02.849805+0000 osd.0 (osd.0) 94 : cluster [DBG] 8.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:02.860349+0000 osd.0 (osd.0) 95 : cluster [DBG] 8.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 95)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:02.849805+0000 osd.0 (osd.0) 94 : cluster [DBG] 8.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:02.860349+0000 osd.0 (osd.0) 95 : cluster [DBG] 8.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1515520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:34.440768+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:03.839888+0000 osd.0 (osd.0) 96 : cluster [DBG] 8.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:03.850488+0000 osd.0 (osd.0) 97 : cluster [DBG] 8.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 97)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:03.839888+0000 osd.0 (osd.0) 96 : cluster [DBG] 8.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:03.850488+0000 osd.0 (osd.0) 97 : cluster [DBG] 8.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1482752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:35.441127+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1474560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:36.441379+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856207 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1466368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:37.441600+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1458176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:38.441850+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:07.842899+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:07.853344+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 99)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:07.842899+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:07.853344+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:39.442173+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:40.442333+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:09.857858+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:09.868425+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 101)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:09.857858+0000 osd.0 (osd.0) 100 : cluster [DBG] 7.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:09.868425+0000 osd.0 (osd.0) 101 : cluster [DBG] 7.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 1433600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:41.442490+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 861031 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 1433600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:42.442632+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:43.442749+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:44.442873+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:45.443052+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:46.443210+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.009348869s of 14.024394989s, submitted: 8
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863444 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:47.443363+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:16.874115+0000 osd.0 (osd.0) 102 : cluster [DBG] 2.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:16.884531+0000 osd.0 (osd.0) 103 : cluster [DBG] 2.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 103)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:16.874115+0000 osd.0 (osd.0) 102 : cluster [DBG] 2.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:16.884531+0000 osd.0 (osd.0) 103 : cluster [DBG] 2.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1409024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:48.443543+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1409024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:49.443693+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:50.443843+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:19.944759+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:19.955353+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 105)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:19.944759+0000 osd.0 (osd.0) 104 : cluster [DBG] 5.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:19.955353+0000 osd.0 (osd.0) 105 : cluster [DBG] 5.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:51.444053+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865857 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1384448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:52.444220+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:53.444382+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1376256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:54.444551+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:55.444769+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:24.935334+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:24.945851+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 107)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:24.935334+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:24.945851+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:56.444972+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868270 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:57.445118+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1441792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:58.445231+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098445892s of 12.108797073s, submitted: 6
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 1433600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:54:59.445371+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:28.982998+0000 osd.0 (osd.0) 108 : cluster [DBG] 10.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:28.993542+0000 osd.0 (osd.0) 109 : cluster [DBG] 10.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 109)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:28.982998+0000 osd.0 (osd.0) 108 : cluster [DBG] 10.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:28.993542+0000 osd.0 (osd.0) 109 : cluster [DBG] 10.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 1433600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:00.445552+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:01.445733+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:31.017434+0000 osd.0 (osd.0) 110 : cluster [DBG] 7.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:31.027968+0000 osd.0 (osd.0) 111 : cluster [DBG] 7.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 111)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:31.017434+0000 osd.0 (osd.0) 110 : cluster [DBG] 7.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:31.027968+0000 osd.0 (osd.0) 111 : cluster [DBG] 7.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875505 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1417216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:02.445941+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:32.007471+0000 osd.0 (osd.0) 112 : cluster [DBG] 3.c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:32.018345+0000 osd.0 (osd.0) 113 : cluster [DBG] 3.c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 113)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:32.007471+0000 osd.0 (osd.0) 112 : cluster [DBG] 3.c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:32.018345+0000 osd.0 (osd.0) 113 : cluster [DBG] 3.c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1409024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:03.446253+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:32.989410+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:32.999874+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 115)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:32.989410+0000 osd.0 (osd.0) 114 : cluster [DBG] 11.4 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:32.999874+0000 osd.0 (osd.0) 115 : cluster [DBG] 11.4 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1409024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:04.446476+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1400832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:05.446716+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:06.446823+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877918 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:07.447020+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:08.447156+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 1392640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:09.447278+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1384448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:10.447405+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1384448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.002461433s of 12.017446518s, submitted: 8
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:11.447556+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:41.000457+0000 osd.0 (osd.0) 116 : cluster [DBG] 7.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:41.011096+0000 osd.0 (osd.0) 117 : cluster [DBG] 7.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 117)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:41.000457+0000 osd.0 (osd.0) 116 : cluster [DBG] 7.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:41.011096+0000 osd.0 (osd.0) 117 : cluster [DBG] 7.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880329 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:12.447816+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:13.447969+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 1359872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:14.448090+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 1359872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:15.448274+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:45.065423+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:45.075951+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 1351680 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 119)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:45.065423+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:45.075951+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:16.448485+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882742 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:17.448629+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:18.448810+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:48.138089+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:48.148629+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 121)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:48.138089+0000 osd.0 (osd.0) 120 : cluster [DBG] 3.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:48.148629+0000 osd.0 (osd.0) 121 : cluster [DBG] 3.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:19.449018+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:20.449188+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:21.449389+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885153 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:22.449580+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 1302528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:23.449926+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.083649635s of 13.093296051s, submitted: 6
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:24.450107+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:54.093782+0000 osd.0 (osd.0) 122 : cluster [DBG] 8.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:54.105740+0000 osd.0 (osd.0) 123 : cluster [DBG] 8.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 1269760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 123)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:54.093782+0000 osd.0 (osd.0) 122 : cluster [DBG] 8.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:54.105740+0000 osd.0 (osd.0) 123 : cluster [DBG] 8.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:25.450432+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 1269760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:26.450569+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1253376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889977 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:27.450740+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:57.091291+0000 osd.0 (osd.0) 124 : cluster [DBG] 11.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:57.101828+0000 osd.0 (osd.0) 125 : cluster [DBG] 11.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1253376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 125)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:57.091291+0000 osd.0 (osd.0) 124 : cluster [DBG] 11.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:57.101828+0000 osd.0 (osd.0) 125 : cluster [DBG] 11.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:28.451025+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:29.451182+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:30.451378+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:55:59.994044+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:00.004631+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 127)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:55:59.994044+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:00.004631+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:31.451726+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:01.043119+0000 osd.0 (osd.0) 128 : cluster [DBG] 7.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:01.053649+0000 osd.0 (osd.0) 129 : cluster [DBG] 7.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1228800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 129)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:01.043119+0000 osd.0 (osd.0) 128 : cluster [DBG] 7.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:01.053649+0000 osd.0 (osd.0) 129 : cluster [DBG] 7.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897212 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:32.451987+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:01.998179+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:02.008725+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 131)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:01.998179+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.17 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:02.008725+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.17 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:33.452203+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:34.452407+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:03.982223+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.13 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:03.992859+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.13 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 133)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:03.982223+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.13 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:03.992859+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.13 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.898438454s of 10.926205635s, submitted: 12
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:35.452627+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:05.020043+0000 osd.0 (osd.0) 134 : cluster [DBG] 2.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:05.030467+0000 osd.0 (osd.0) 135 : cluster [DBG] 2.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 135)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:05.020043+0000 osd.0 (osd.0) 134 : cluster [DBG] 2.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:05.030467+0000 osd.0 (osd.0) 135 : cluster [DBG] 2.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:36.453072+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904453 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:37.453198+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:07.028788+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:07.039346+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 137)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:07.028788+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:07.039346+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:38.453334+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:39.453469+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:09.078381+0000 osd.0 (osd.0) 138 : cluster [DBG] 3.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:09.088878+0000 osd.0 (osd.0) 139 : cluster [DBG] 3.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 139)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:09.078381+0000 osd.0 (osd.0) 138 : cluster [DBG] 3.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:09.088878+0000 osd.0 (osd.0) 139 : cluster [DBG] 3.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:40.453674+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:10.099328+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:10.109881+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 141)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:10.099328+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:10.109881+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:41.453912+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:11.063633+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:11.074196+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1138688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 143)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:11.063633+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.1f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:11.074196+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.1f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:42.454085+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:12.102858+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:12.113360+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914105 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1138688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 145)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:12.102858+0000 osd.0 (osd.0) 144 : cluster [DBG] 11.10 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:12.113360+0000 osd.0 (osd.0) 145 : cluster [DBG] 11.10 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:43.454268+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:13.113078+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:13.123676+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 147)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:13.113078+0000 osd.0 (osd.0) 146 : cluster [DBG] 8.18 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:13.123676+0000 osd.0 (osd.0) 147 : cluster [DBG] 8.18 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:44.454475+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:45.454616+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1130496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.056254387s of 11.081245422s, submitted: 14
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:46.454754+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:16.101348+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:16.111945+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1122304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 149)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:16.101348+0000 osd.0 (osd.0) 148 : cluster [DBG] 3.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:16.111945+0000 osd.0 (osd.0) 149 : cluster [DBG] 3.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:47.454940+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:17.067746+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:17.078516+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921344 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 151)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:17.067746+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:17.078516+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:48.455158+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:18.036053+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:18.046640+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 153)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:18.036053+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.12 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:18.046640+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.12 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:49.455388+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:19.029726+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:19.040280+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 155)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:19.029726+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.19 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:19.040280+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.19 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:50.455600+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:51.455749+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 1081344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:52.455889+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:22.030825+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:22.044988+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928583 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 157)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:22.030825+0000 osd.0 (osd.0) 156 : cluster [DBG] 8.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:22.044988+0000 osd.0 (osd.0) 157 : cluster [DBG] 8.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:53.456137+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:54.456276+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:55.456503+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:56.456673+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:26.030361+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:26.044417+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.864217758s of 10.886577606s, submitted: 12
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 159)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:26.030361+0000 osd.0 (osd.0) 158 : cluster [DBG] 10.e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:26.044417+0000 osd.0 (osd.0) 159 : cluster [DBG] 10.e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:57.456941+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:26.987915+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:27.005612+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933407 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 161)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:26.987915+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.f scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:27.005612+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.f scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:58.457173+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:28.005040+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:28.019167+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 1024000 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 163)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:28.005040+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:28.019167+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:55:59.457341+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:00.457499+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:01.457632+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 991232 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:02.457748+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:31.880794+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:31.894908+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938235 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2039808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 165)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:31.880794+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.15 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:31.894908+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.15 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:03.457964+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 2031616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:04.458188+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 2031616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:05.458425+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 2031616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:06.458585+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 2023424 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:07.458703+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938235 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 2023424 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:08.458824+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 2015232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:09.458942+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 2015232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.980209351s of 12.991925240s, submitted: 6
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:10.459072+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:39.979855+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:39.994006+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 2007040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 167)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:39.979855+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:39.994006+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:11.459325+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 2007040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:12.459464+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940648 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 2007040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:13.459613+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1998848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:14.459765+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1998848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:15.459956+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1998848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:16.460129+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:45.945705+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:45.959837+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 1966080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 169)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:45.945705+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.6 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:45.959837+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.6 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:17.460361+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:46.906506+0000 osd.0 (osd.0) 170 : cluster [DBG] 6.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:46.924191+0000 osd.0 (osd.0) 171 : cluster [DBG] 6.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945470 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 1966080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 171)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:46.906506+0000 osd.0 (osd.0) 170 : cluster [DBG] 6.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:46.924191+0000 osd.0 (osd.0) 171 : cluster [DBG] 6.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:18.460566+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:47.878275+0000 osd.0 (osd.0) 172 : cluster [DBG] 6.a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:48.226809+0000 osd.0 (osd.0) 173 : cluster [DBG] 6.a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 1957888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 173)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:47.878275+0000 osd.0 (osd.0) 172 : cluster [DBG] 6.a scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:48.226809+0000 osd.0 (osd.0) 173 : cluster [DBG] 6.a scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:19.460769+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:48.928802+0000 osd.0 (osd.0) 174 : cluster [DBG] 6.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:48.938818+0000 osd.0 (osd.0) 175 : cluster [DBG] 6.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 1957888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 175)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:48.928802+0000 osd.0 (osd.0) 174 : cluster [DBG] 6.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:48.938818+0000 osd.0 (osd.0) 175 : cluster [DBG] 6.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:20.460965+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:49.896592+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:49.910704+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 1941504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.926292419s of 10.951813698s, submitted: 12
Feb 01 15:23:59 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14632 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 177)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:49.896592+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.7 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:49.910704+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.7 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:21.461157+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:50.931648+0000 osd.0 (osd.0) 178 : cluster [DBG] 6.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:50.949321+0000 osd.0 (osd.0) 179 : cluster [DBG] 6.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 1900544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 179)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:50.931648+0000 osd.0 (osd.0) 178 : cluster [DBG] 6.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:50.949321+0000 osd.0 (osd.0) 179 : cluster [DBG] 6.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:22.461357+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955114 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 1900544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:23.461510+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:52.933035+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.0 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:52.957730+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.0 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 181)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:52.933035+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.0 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:52.957730+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.0 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 1892352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:24.461713+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 1892352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:25.461932+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 1892352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:26.462167+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1875968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:27.463023+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957525 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1875968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:28.463792+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:58.051269+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.11 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:56:58.086626+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.11 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1867776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 183)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:58.051269+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.11 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:56:58.086626+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.11 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:29.464504+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 1859584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:30.464656+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:00.049508+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:00.091772+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 1859584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 185)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:00.049508+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.5 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:00.091772+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.5 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:31.464925+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1851392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:32.465402+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962349 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1851392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:33.465710+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1851392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.231379509s of 13.245192528s, submitted: 8
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:34.465846+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:04.176907+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:04.201656+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 187)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:04.176907+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.16 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:04.201656+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.16 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:35.466137+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1843200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:36.466289+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:37.466446+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964762 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1818624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:38.466657+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1810432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:39.466808+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1810432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:40.466972+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1810432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:41.467159+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:42.467338+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964762 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:43.467463+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1802240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:44.467601+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1794048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:45.467750+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 1794048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.912002563s of 11.945251465s, submitted: 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:46.467899+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:16.122206+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:16.146934+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 1777664 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 189)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:16.122206+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:16.146934+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:47.468113+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:17.126519+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:17.154714+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969584 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 1769472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 191)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:17.126519+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.9 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:17.154714+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.9 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:48.468288+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:18.167903+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:18.206723+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 1753088 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 193)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:18.167903+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:18.206723+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:49.468471+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1728512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:50.468825+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1728512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:51.468953+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1728512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:52.469125+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971995 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1720320 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:53.469251+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1720320 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:54.469389+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1720320 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:55.469556+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1712128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:56.469682+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1703936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:57.469873+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971995 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1703936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:58.470033+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1703936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:56:59.470145+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:00.470271+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.005846977s of 15.016488075s, submitted: 6
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:01.470430+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:31.138706+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:31.181059+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 195)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:31.138706+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.1 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:31.181059+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.1 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:02.470626+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974406 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1687552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:03.470752+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:33.079182+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:33.118044+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1695744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 197)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:33.079182+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.3 scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:33.118044+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.3 scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:04.470992+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:34.080654+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:34.108871+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 199)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:34.080654+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1d scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:34.108871+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1d scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:05.471210+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:35.058667+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:35.100894+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1654784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 201)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:35.058667+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:35.100894+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:06.471441+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:36.011156+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:36.042861+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 203)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:36.011156+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1e scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:36.042861+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1e scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:07.471638+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:37.057443+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  will send 2026-02-01T14:57:37.078627+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client handle_log_ack log(last 205)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:37.057443+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1b scrub starts
Feb 01 15:23:59 compute-0 ceph-osd[85969]: log_client  logged 2026-02-01T14:57:37.078627+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1b scrub ok
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:08.480918+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1638400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:09.481138+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:10.481319+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1630208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:11.481449+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1613824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:12.481627+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1613824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:13.481774+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:14.481907+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1605632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:15.482061+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:16.482193+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:17.482383+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1597440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:18.482550+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:19.482706+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1589248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:20.482851+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:21.482971+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:22.483126+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:23.483268+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:24.483414+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:25.483558+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1581056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:26.483691+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:27.483854+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1572864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:28.483988+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:29.484119+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:30.484265+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1564672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:31.484389+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:32.484537+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:33.484696+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1548288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:34.484837+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1540096 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:35.485045+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:36.485194+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:37.485361+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1531904 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:38.485504+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1523712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:39.485734+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1523712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:40.486073+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1523712 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:41.486228+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1515520 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:42.486369+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1515520 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:43.486551+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:44.486793+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:45.487100+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 1507328 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:46.487205+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:47.487376+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:48.487492+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1499136 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:49.487594+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:50.487710+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1490944 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:51.487816+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:52.487920+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1482752 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:53.488053+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:54.488175+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:55.488346+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1474560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:56.488488+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1466368 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:57.488592+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:58.488700+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1458176 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:57:59.488836+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:00.488961+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1449984 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:01.489113+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:02.489310+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:03.489498+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1441792 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:04.489655+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1433600 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:05.489824+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1425408 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:06.489946+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:07.490077+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:08.490189+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1417216 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:09.490359+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:10.490516+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:11.490625+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1400832 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:12.490816+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:13.490930+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1392640 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:14.491080+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1384448 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:15.491239+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1359872 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:16.491350+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1351680 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:17.491471+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1351680 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:18.491577+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1343488 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:19.491694+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1327104 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:20.491807+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1327104 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:21.491913+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1327104 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:22.492024+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1318912 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:23.492152+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1318912 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:24.492333+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1302528 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:25.492504+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1302528 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:26.492614+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1302528 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:27.492738+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1294336 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:28.492804+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1294336 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:29.492922+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1286144 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:30.493032+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1286144 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:31.493140+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1286144 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:32.493314+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1277952 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:33.493498+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1277952 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:34.493643+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:35.493823+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:36.494031+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:37.494161+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:38.494349+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:39.494501+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:40.494664+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:41.494791+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:42.495005+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:43.495226+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:44.495398+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:45.495601+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:46.495769+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:47.495963+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:48.496102+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:49.496230+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:50.496397+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:51.496566+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:52.496703+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:53.496820+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1187840 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:54.497070+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:55.497270+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:56.497471+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:57.497678+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:58.497866+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1171456 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:58:59.498030+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1171456 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:00.498162+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1155072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:01.498274+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1155072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:02.498443+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:03.498593+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:04.498755+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:05.498906+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1138688 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:06.499038+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1138688 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:07.499226+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:08.499376+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:09.499516+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:10.499650+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1122304 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:11.499814+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1122304 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:12.499935+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1122304 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:13.500080+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:14.500219+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:15.500354+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:16.500496+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:17.500635+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:18.500822+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1097728 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:19.500954+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1097728 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:20.501081+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1089536 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:21.501213+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1089536 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:22.501349+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1081344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:23.501502+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1081344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:24.501620+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1081344 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:25.501791+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1073152 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:26.501902+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1064960 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:27.502022+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1064960 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:28.502148+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1056768 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:29.502253+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1056768 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:30.502403+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1048576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:31.502514+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1048576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:32.502665+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1048576 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:33.502794+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:34.503025+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:35.503217+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:36.503372+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:37.503483+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:38.503621+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:39.503726+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:40.503854+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:41.504019+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:42.504271+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:43.504506+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:44.504711+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:45.504900+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:46.505091+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 999424 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:47.505234+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:48.505380+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:49.505499+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:50.505666+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:51.505777+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:52.505952+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:53.506087+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:54.506204+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:55.506360+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 958464 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:56.506465+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:57.506583+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:58.506753+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T14:59:59.506929+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:00.507090+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:01.507206+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:02.507341+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:03.507464+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 909312 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:04.507681+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 909312 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:05.507870+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 901120 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:06.508020+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 901120 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:07.508170+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 901120 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:08.508289+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 892928 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:09.508460+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:10.508620+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:11.508737+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:12.508879+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:13.509032+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:14.509192+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:15.509344+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 851968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:16.509481+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 851968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:17.509676+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:18.509814+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:19.509949+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:20.510099+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:21.510257+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:22.510440+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:23.510651+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:24.510828+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:25.510988+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:26.511216+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:27.511350+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:28.511529+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:29.511668+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:30.511812+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s
                                           Interval WAL: 5615 writes, 888 syncs, 6.32 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:31.511953+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 729088 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:32.512105+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:33.512235+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:34.512407+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:35.512612+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:36.512827+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:37.512986+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:38.513153+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:39.513342+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:40.513480+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:41.513605+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:42.513772+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:43.513918+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:44.514091+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:45.514282+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:46.514433+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:47.514551+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:48.514674+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:49.514824+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:50.515004+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:51.515141+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 638976 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:52.515272+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 638976 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:53.515425+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 638976 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:54.515611+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:55.515815+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:56.515976+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 622592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:57.516097+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 622592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:58.516250+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:00:59.516407+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:00.516577+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:01.516709+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:02.516853+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:03.517002+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:04.517121+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 606208 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:05.517284+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 598016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:06.517351+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 598016 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:07.517467+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 589824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:08.517626+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 589824 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:09.517752+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:10.517888+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:11.518012+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 581632 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:12.518135+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 573440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:13.518251+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 573440 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:14.518411+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 565248 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:15.518616+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 557056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:16.518791+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 557056 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:17.518920+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:18.519062+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:19.519206+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 548864 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:20.519517+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 540672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:21.519624+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 540672 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:22.519763+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 532480 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:23.519930+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 532480 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:24.520081+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 524288 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:25.520280+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 516096 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:26.520513+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 265.820220947s of 265.838745117s, submitted: 12
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 450560 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:27.520655+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 106496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:28.520845+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 933888 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:29.521027+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 933888 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:30.521336+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 933888 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:31.521520+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 917504 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:32.521693+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 917504 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:33.521851+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 917504 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:34.522038+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 909312 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:35.522265+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 909312 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:36.522418+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:37.522588+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:38.522848+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:39.523019+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:40.523155+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 884736 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:41.523288+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 884736 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:42.523457+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 876544 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:43.523585+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 876544 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:44.523743+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 860160 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:45.523890+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 860160 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:46.524043+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 851968 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:47.524840+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 851968 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:48.525465+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:49.525959+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:50.526101+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:51.526245+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:52.526639+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 811008 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:53.526827+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 802816 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:54.526967+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 794624 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:55.527646+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 778240 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:56.527758+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:57.527916+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:58.528114+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:01:59.528330+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 753664 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:00.528525+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 753664 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:01.528682+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 745472 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:02.529014+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 745472 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:03.529267+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 745472 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:04.529381+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 745472 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:05.529584+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:06.529750+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:07.530001+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 729088 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:08.530180+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 729088 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:09.530351+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:10.530583+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:11.530789+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:12.530937+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:13.531059+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:14.531186+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:15.531353+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 679936 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:16.531493+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 679936 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:17.531620+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:18.531752+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:19.531885+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:20.532001+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:21.532116+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:22.532257+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:23.532420+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 647168 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:24.532547+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 638976 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:25.532745+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 638976 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:26.532883+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:27.533030+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:28.533188+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:29.533349+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 622592 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:30.533491+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 622592 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:31.533620+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:32.533730+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:33.533865+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:34.533995+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:35.534177+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:36.534354+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:37.534535+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:38.534732+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:39.534873+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:40.535063+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:41.535244+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:42.535387+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:43.535498+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:44.535644+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:45.535800+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:46.535919+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:47.536066+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:48.536229+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:49.536449+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:50.536645+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:51.536808+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:52.537016+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:53.537191+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:54.537377+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:55.537638+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:56.537783+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:57.538007+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:58.538142+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:02:59.538260+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:00.538351+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:01.538533+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 548864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:02.538686+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:03.538864+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:04.539016+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:05.539189+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:06.539394+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:07.539548+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:08.539717+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:09.539849+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:10.539969+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:11.540118+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:12.540242+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:13.540439+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:14.540578+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:15.540765+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 532480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:16.540897+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:17.541044+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:18.541169+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:19.541285+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:20.541431+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:21.541574+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:22.541744+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:23.541861+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:24.541994+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:25.542200+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:26.542357+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:27.542538+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:28.542723+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:29.542891+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:30.543024+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:31.543180+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:32.543338+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:33.543500+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:34.543651+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 499712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:35.544653+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 491520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:36.546125+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:37.546330+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:38.546508+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:39.546650+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:40.546788+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:41.546957+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:42.547119+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:43.547261+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:44.547482+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:45.547730+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:46.547910+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:47.548133+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:48.548375+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 434176 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:49.548572+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:50.548805+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:51.549042+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:52.549246+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:53.549487+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:54.549674+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:55.549895+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:56.550106+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:57.550394+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:58.550670+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:03:59.550945+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:00.551166+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:01.551410+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:02.552274+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:03.552488+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:04.552713+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:05.552958+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:06.553142+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:07.553358+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:08.553565+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:09.553724+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:10.553924+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:11.554062+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:12.554179+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:13.554355+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:14.554499+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:15.554668+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:16.554782+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:17.554885+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:18.555027+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:19.555179+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:20.555335+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:21.555465+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:22.555582+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:23.555707+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:24.555830+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:25.555999+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:26.556131+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:27.556250+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:28.556398+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:29.556635+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:30.556911+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:31.557054+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:32.557225+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:33.557355+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:34.557485+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:35.557638+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:36.557805+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:37.557985+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:38.558175+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:39.558413+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:40.558571+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:41.558716+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:42.558860+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:43.559569+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:44.559729+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:45.559917+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:46.560072+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:47.560200+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 385024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:48.560373+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 368640 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:49.560535+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:50.560758+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:51.560954+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:52.561116+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:53.561309+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:54.561479+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 335872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:55.561810+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 335872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:56.561975+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:57.562172+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:58.562554+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:04:59.562719+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:00.563122+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:01.563356+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:02.563539+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:03.563690+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:04.563828+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:05.564056+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:06.564196+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:07.564369+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:08.564483+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:09.564710+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:10.564867+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:11.565061+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:12.565206+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:13.565355+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:14.565536+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:15.565683+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76242944 unmapped: 294912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:16.565853+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:17.566021+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:18.566173+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:19.566416+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:20.566590+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:21.566805+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:22.566999+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:23.567214+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:24.567454+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:25.567676+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:26.567843+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:27.568046+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:28.568231+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:29.568347+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:30.568467+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:31.568597+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:32.568708+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:33.568829+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:34.568933+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:35.569116+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: mgrc ms_handle_reset ms_handle_reset con 0x563b62ede000
Feb 01 15:23:59 compute-0 ceph-osd[85969]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:23:59 compute-0 ceph-osd[85969]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: get_auth_request con 0x563b62c1c800 auth_method 0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 835584 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:36.569266+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 835584 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:37.569418+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 835584 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:38.569644+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 835584 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:39.569826+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 835584 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:40.569970+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 ms_handle_reset con 0x563b62edf000 session 0x563b62c4e8c0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b62bf2c00
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:41.570085+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:42.570241+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:43.570376+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:44.570516+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:45.570657+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:46.570759+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:47.570869+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:48.570970+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:49.571072+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1138688 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:50.571176+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1114112 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:51.571350+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1114112 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:52.571513+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1114112 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:53.571665+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76472320 unmapped: 1114112 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:54.571784+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:55.571954+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:56.572099+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:57.572214+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:58.572370+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:05:59.572512+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:00.572646+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:01.572791+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:02.572948+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:03.573086+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:04.573217+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:05.573342+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:06.573486+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:07.573614+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:08.573749+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:09.573869+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:10.574066+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:11.574200+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:12.574344+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:13.574499+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:14.574601+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:15.574763+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:16.574924+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:17.575083+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:18.575198+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:19.575342+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:20.575509+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:21.575690+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:22.575815+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:23.575936+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:24.576068+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986469 data_alloc: 218103808 data_used: 5051
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:25.576431+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1056768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:26.576567+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b62dc8800
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.885528564s of 300.142120361s, submitted: 106
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:27.576676+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:28.576830+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:29.576998+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:30.577139+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:31.577274+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:32.577407+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:33.577509+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1171456 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:34.577632+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:35.577812+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:36.577923+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:37.578030+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:38.578210+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:39.578364+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1163264 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:40.578534+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:41.578668+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:42.578828+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:43.578987+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:44.579159+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:45.579381+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:46.579593+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:47.579753+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 1146880 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:48.579855+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1130496 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:49.579946+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:50.580082+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:51.580219+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:52.580390+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:53.580594+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:54.580763+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:55.580930+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:56.581108+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:57.581280+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:58.581455+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:06:59.581596+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1122304 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:00.581738+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:01.581909+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:02.582103+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:03.582269+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:04.582519+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:05.582719+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:06.582860+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:07.582982+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:08.583156+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:09.583283+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1105920 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:10.583484+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:11.583634+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:12.583774+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:13.583953+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:14.584118+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:15.584266+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:16.584423+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:17.584568+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:18.584769+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:19.584944+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1097728 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:20.585109+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:21.585260+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:22.585453+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:23.585583+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 1081344 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:24.585745+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:25.585903+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:26.586105+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:27.586275+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:28.586491+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:29.586636+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:30.586778+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:31.587051+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:32.587245+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:33.587453+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:34.587616+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:35.587804+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:36.587982+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:37.588162+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:38.588336+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:39.588444+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1089536 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:40.588556+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:41.588676+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:42.588870+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:43.589007+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:44.589160+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:45.589353+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:46.589506+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:47.589655+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:48.589767+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:49.589892+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:50.590056+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 1073152 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:51.590192+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:52.591589+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:53.591736+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:54.595744+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:55.595929+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:56.596087+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:57.596225+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:58.596399+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:07:59.596546+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:00.596764+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 1064960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:01.596905+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:02.597078+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:03.597224+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:04.597431+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:05.597584+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:06.597773+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:07.597950+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:08.598147+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:09.598281+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:10.598457+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:11.598612+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:12.598776+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:13.598922+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:14.599098+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:15.599279+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:16.599463+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:17.599627+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:18.599866+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:19.599994+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:20.600158+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 1048576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:21.600369+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:22.600567+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:23.600786+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 1032192 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:24.600905+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:25.601051+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:26.601204+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:27.601376+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:28.601517+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:29.601670+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:30.601795+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:31.601946+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:32.602077+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:33.602232+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:34.602352+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:35.602542+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:36.602742+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:37.602930+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:38.603090+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:39.603263+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:40.603396+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 1024000 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:41.603581+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:42.603772+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 1007616 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:43.603996+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:44.604222+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:45.604386+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:46.604509+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:47.604660+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 999424 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:48.604800+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:49.604932+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:50.605076+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:51.605216+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:52.605375+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:53.605500+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:54.605634+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:55.605815+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:56.605927+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:57.606039+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:58.606205+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:08:59.606331+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:00.606449+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:01.606567+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:02.606683+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:03.606833+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:04.606934+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:05.607084+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:06.607231+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:07.607390+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:08.607533+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:09.607722+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:10.607835+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:11.607979+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:12.608141+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:13.608270+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:14.608412+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:15.608602+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:16.608750+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:17.608926+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:18.609072+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:19.609243+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:20.609365+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:21.609524+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:22.609723+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:23.609908+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:24.610088+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:25.610288+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:26.610482+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:27.610641+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:28.610772+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:29.610923+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:30.611056+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:31.611193+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:32.611318+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:33.611441+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:34.611570+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:35.611765+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:36.611923+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:37.612068+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:38.612198+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:39.612291+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:40.612437+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:41.612546+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:42.612678+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 933888 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:43.612819+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:44.612970+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:45.613152+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:46.613354+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 991232 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:47.613491+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 991232 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:48.613616+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:49.613735+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:50.613877+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:51.614010+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:52.614361+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:53.614588+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:54.614685+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:55.615142+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:56.615314+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:57.615923+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:58.616462+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:09:59.616767+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:00.617140+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:01.617390+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:02.617553+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:03.617742+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:04.617945+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:05.618184+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:06.618428+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:07.618612+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:08.618761+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:09.618891+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:10.619045+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:11.619210+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:12.619389+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:13.619574+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:14.619703+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 983040 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:15.619859+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:16.620047+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:17.620236+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:18.620377+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:19.620655+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:20.620837+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:21.621094+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:22.621322+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:23.621503+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:24.621744+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:25.621968+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:26.622131+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:27.622374+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:28.622594+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:29.622760+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 974848 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:30.622934+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5863 writes, 1012 syncs, 5.79 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b61223a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563b612238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:31.623135+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:32.623347+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:33.623469+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:34.623634+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:35.623827+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:36.624004+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:37.624178+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:38.624363+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:39.624537+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:40.624716+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:41.624855+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:42.625031+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:43.625196+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:44.625355+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:45.625552+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:46.625686+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:47.625807+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:48.625978+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:49.626116+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:50.626237+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:51.626399+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:52.626539+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:53.626648+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:54.626769+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:55.626937+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:56.627057+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:57.627219+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:58.627372+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:10:59.627536+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:00.627672+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:01.627799+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:02.627967+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:03.628171+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:04.628348+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:05.628569+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:06.628748+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 966656 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:07.628929+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:08.629116+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:09.629253+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 950272 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:10.629415+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:11.629580+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:12.629745+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:13.629875+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:14.630048+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:15.630254+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:16.630378+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:17.630528+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:18.630703+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:19.630829+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 942080 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:20.630943+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:21.631070+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:22.631210+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:23.631349+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:24.631533+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:25.631730+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 925696 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:26.631915+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.919250488s of 299.941284180s, submitted: 18
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 892928 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:27.632065+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 1843200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:28.632202+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:29.632377+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:30.632540+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:31.632708+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:32.632872+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:33.633023+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:34.633323+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:35.633553+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:36.633794+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:37.634046+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:38.634203+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:39.634391+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:40.634678+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:41.634887+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:42.635084+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:43.635254+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:44.635425+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:45.635672+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:46.635861+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:47.636096+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:48.636261+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:49.636377+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:50.636537+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:51.636737+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:52.636930+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:53.637056+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:54.637221+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:55.637386+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:56.637518+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:57.637627+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:58.637742+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 712704 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:11:59.637850+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 704512 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:00.637972+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 704512 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:01.638106+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:02.638239+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:03.638393+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:04.638543+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:05.638781+0000)
Feb 01 15:23:59 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:23:59 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:23:59 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:23:59 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:06.638930+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:07.639055+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:08.639188+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:09.639370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:10.639535+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 720896 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b62bf3c00
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986853 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb8019/0x174000, compress 0x0/0x0/0x0, omap 0x14c7f, meta 0x2bbb381), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:11.639709+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 581632 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 120 handle_osd_map epochs [122,122], i have 120, src has [1,122]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.179214478s of 45.445621490s, submitted: 106
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:12.639875+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 17244160 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc6b0000/0x0/0x4ffc00000, data 0x8bb7a5/0x97a000, compress 0x0/0x0/0x0, omap 0x150ad, meta 0x2bbaf53), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 123 ms_handle_reset con 0x563b62bf3c00 session 0x563b65616000
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:13.640034+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 17358848 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b62bf2800
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:14.640156+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc23f000/0x0/0x4ffc00000, data 0xd2d35d/0xded000, compress 0x0/0x0/0x0, omap 0x154d8, meta 0x2bbab28), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 124 ms_handle_reset con 0x563b62bf2800 session 0x563b63ac9a40
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:15.640330+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065474 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:16.640526+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:17.640653+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:18.640776+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23a000/0x0/0x4ffc00000, data 0xd2ef15/0xdf0000, compress 0x0/0x0/0x0, omap 0x15763, meta 0x2bba89d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fc23a000/0x0/0x4ffc00000, data 0xd2ef15/0xdf0000, compress 0x0/0x0/0x0, omap 0x15763, meta 0x2bba89d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:19.640880+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:20.641005+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:21.641152+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:22.641290+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:23.641437+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:24.641543+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:25.641703+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:26.641838+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:27.641996+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:28.642134+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:29.642346+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:30.642486+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 17162240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:31.642679+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:32.642854+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:33.642998+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:34.643134+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:35.643312+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:36.643462+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:37.643579+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:38.643719+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 17154048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 10
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:39.643914+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:40.644046+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:41.644246+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:42.644371+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:43.644497+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:44.644623+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc237000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:45.644811+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068248 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:46.644935+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17211392 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 11
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.540393829s of 34.747665405s, submitted: 34
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:47.645110+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:48.645234+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:49.645371+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:50.645514+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067720 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:51.645644+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:52.645795+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:53.645938+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:54.646069+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc239000/0x0/0x4ffc00000, data 0xd30994/0xdf3000, compress 0x0/0x0/0x0, omap 0x159e0, meta 0x2bba620), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:55.646214+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071182 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:56.646337+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.962168694s of 10.011589050s, submitted: 27
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:57.646478+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:58.646605+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:12:59.646744+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc236000/0x0/0x4ffc00000, data 0xd32599/0xdf6000, compress 0x0/0x0/0x0, omap 0x15c6f, meta 0x2bba391), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:00.646949+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070478 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:01.647167+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:02.647320+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:03.647461+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 17129472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:04.647609+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:05.647784+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073956 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc231000/0x0/0x4ffc00000, data 0xd34018/0xdf9000, compress 0x0/0x0/0x0, omap 0x15ef0, meta 0x2bba110), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:06.647912+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc231000/0x0/0x4ffc00000, data 0xd34018/0xdf9000, compress 0x0/0x0/0x0, omap 0x15ef0, meta 0x2bba110), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:07.648060+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:08.648199+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:09.648414+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:10.648551+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17121280 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.001125336s of 14.017148018s, submitted: 15
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073972 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc231000/0x0/0x4ffc00000, data 0xd34018/0xdf9000, compress 0x0/0x0/0x0, omap 0x15ef0, meta 0x2bba110), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:11.648695+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:12.648854+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:13.648997+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:14.649121+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:15.649265+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fc231000/0x0/0x4ffc00000, data 0xd34018/0xdf9000, compress 0x0/0x0/0x0, omap 0x15ef0, meta 0x2bba110), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073236 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:16.649408+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:17.649529+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:18.649732+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 17113088 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:19.649859+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 17104896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:20.649962+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc22e000/0x0/0x4ffc00000, data 0xd35c1d/0xdfc000, compress 0x0/0x0/0x0, omap 0x16183, meta 0x2bb9e7d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 17104896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.911887169s of 10.000762939s, submitted: 27
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076010 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:21.650111+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 17104896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fc230000/0x0/0x4ffc00000, data 0xd35c1d/0xdfc000, compress 0x0/0x0/0x0, omap 0x16183, meta 0x2bb9e7d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:22.650260+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 17104896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:23.650464+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 17104896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:24.650574+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b65b39c00
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16973824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:25.650694+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16973824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081036 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:26.650787+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16973824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd37737/0xe00000, compress 0x0/0x0/0x0, omap 0x16408, meta 0x2bb9bf8), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:27.650893+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd37737/0xe00000, compress 0x0/0x0/0x0, omap 0x16408, meta 0x2bb9bf8), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16973824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:28.651103+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc225000/0x0/0x4ffc00000, data 0xd3936c/0xe03000, compress 0x0/0x0/0x0, omap 0x165ed, meta 0x2bb9a13), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16973824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc225000/0x0/0x4ffc00000, data 0xd3936c/0xe03000, compress 0x0/0x0/0x0, omap 0x165ed, meta 0x2bb9a13), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:29.651243+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 16957440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:30.651404+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 16957440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082516 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:31.651550+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 16957440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:32.651714+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd392d1/0xe02000, compress 0x0/0x0/0x0, omap 0x165ed, meta 0x2bb9a13), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 16957440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.926730156s of 12.015572548s, submitted: 36
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:33.651903+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 16957440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:34.652017+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 16949248 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:35.652204+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 16949248 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 130 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085850 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc22a000/0x0/0x4ffc00000, data 0xd392d1/0xe02000, compress 0x0/0x0/0x0, omap 0x165ed, meta 0x2bb9a13), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:36.652351+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 16941056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:37.652552+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 16941056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:38.652800+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 16941056 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:39.652970+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 15876096 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:40.653176+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 15876096 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089596 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:41.653399+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 15876096 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc223000/0x0/0x4ffc00000, data 0xd3ca10/0xe09000, compress 0x0/0x0/0x0, omap 0x1716f, meta 0x2bb8e91), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:42.653656+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 15876096 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:43.653833+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79544320 unmapped: 15876096 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.994749069s of 11.063447952s, submitted: 43
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:44.653958+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 15867904 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:45.654160+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 15843328 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098992 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:46.654385+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 15843328 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:47.654540+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 15843328 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc214000/0x0/0x4ffc00000, data 0xd41d05/0xe12000, compress 0x0/0x0/0x0, omap 0x17929, meta 0x2bb86d7), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:48.654684+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 15835136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:49.654836+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 15835136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:50.654991+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 135 handle_osd_map epochs [136,137], i have 135, src has [1,137]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 15835136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105432 data_alloc: 218103808 data_used: 5511
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:51.655201+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 15835136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc212000/0x0/0x4ffc00000, data 0xd453e5/0xe18000, compress 0x0/0x0/0x0, omap 0x17bab, meta 0x2bb8455), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:52.655372+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 15835136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:53.655557+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 15802368 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.997282982s of 10.192903519s, submitted: 95
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:54.655722+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 15761408 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:55.655923+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 15728640 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110726 data_alloc: 218103808 data_used: 5783
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:56.656175+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 15712256 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:57.656338+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 15712256 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc20a000/0x0/0x4ffc00000, data 0xd48b05/0xe1e000, compress 0x0/0x0/0x0, omap 0x180b5, meta 0x2bb7f4b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:58.656510+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 15712256 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:13:59.656646+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc20a000/0x0/0x4ffc00000, data 0xd48b05/0xe1e000, compress 0x0/0x0/0x0, omap 0x180b5, meta 0x2bb7f4b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 15704064 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:00.656989+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc20a000/0x0/0x4ffc00000, data 0xd48b05/0xe1e000, compress 0x0/0x0/0x0, omap 0x180b5, meta 0x2bb7f4b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 15704064 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112418 data_alloc: 218103808 data_used: 5783
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:01.657152+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 15671296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:02.657430+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 15671296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:03.657584+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc20b000/0x0/0x4ffc00000, data 0xd4a5d0/0xe21000, compress 0x0/0x0/0x0, omap 0x185ec, meta 0x2bb7a14), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 15663104 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:04.657728+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fc20c000/0x0/0x4ffc00000, data 0xd4a535/0xe20000, compress 0x0/0x0/0x0, omap 0x185ec, meta 0x2bb7a14), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 15654912 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:05.657933+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.514864922s of 11.586832047s, submitted: 54
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc20c000/0x0/0x4ffc00000, data 0xd4a535/0xe20000, compress 0x0/0x0/0x0, omap 0x185ec, meta 0x2bb7a14), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116720 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:06.658054+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:07.658170+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc207000/0x0/0x4ffc00000, data 0xd4bfd0/0xe23000, compress 0x0/0x0/0x0, omap 0x1892b, meta 0x2bb76d5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:08.658363+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:09.658485+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:10.658625+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116720 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:11.658806+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc207000/0x0/0x4ffc00000, data 0xd4bfd0/0xe23000, compress 0x0/0x0/0x0, omap 0x1892b, meta 0x2bb76d5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:12.658934+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:13.659099+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80830464 unmapped: 14589952 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:14.659272+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:15.659594+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 141 handle_osd_map epochs [141,142], i have 142, src has [1,142]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991394997s of 10.002235413s, submitted: 12
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121042 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:16.659757+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc203000/0x0/0x4ffc00000, data 0xd4daea/0xe27000, compress 0x0/0x0/0x0, omap 0x18c68, meta 0x2bb7398), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:17.659957+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:18.660181+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:19.660321+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc202000/0x0/0x4ffc00000, data 0xd4db85/0xe28000, compress 0x0/0x0/0x0, omap 0x18c68, meta 0x2bb7398), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 14573568 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:20.660424+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 14548992 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:21.660568+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 14548992 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:22.660676+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc203000/0x0/0x4ffc00000, data 0xd4dc20/0xe29000, compress 0x0/0x0/0x0, omap 0x18c68, meta 0x2bb7398), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 14524416 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:23.660784+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 14508032 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:24.660930+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc204000/0x0/0x4ffc00000, data 0xd4db85/0xe28000, compress 0x0/0x0/0x0, omap 0x18c68, meta 0x2bb7398), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 14508032 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:25.661170+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc204000/0x0/0x4ffc00000, data 0xd4db85/0xe28000, compress 0x0/0x0/0x0, omap 0x18c68, meta 0x2bb7398), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 14508032 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124664 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:26.661288+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 14508032 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.157759666s of 11.184099197s, submitted: 18
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:27.661490+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 14467072 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:28.661659+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 14458880 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:29.661811+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 142 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 14458880 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:30.661949+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 14458880 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc1ff000/0x0/0x4ffc00000, data 0xd4f78a/0xe2b000, compress 0x0/0x0/0x0, omap 0x18f15, meta 0x2bb70eb), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:31.662105+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 14458880 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:32.662273+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 14401536 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:33.662436+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 14393344 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:34.662595+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc201000/0x0/0x4ffc00000, data 0xd4f78a/0xe2b000, compress 0x0/0x0/0x0, omap 0x18f15, meta 0x2bb70eb), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 14393344 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:35.662763+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 14376960 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130566 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:36.662917+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 14376960 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:37.663068+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 14376960 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:38.663244+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 14376960 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.865568161s of 12.007027626s, submitted: 63
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0xd5116e/0xe2d000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:39.663389+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 14352384 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:40.663524+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 14344192 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:41.663744+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130948 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 14344192 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc1ff000/0x0/0x4ffc00000, data 0xd5116e/0xe2d000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:42.664036+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc1ff000/0x0/0x4ffc00000, data 0xd5116e/0xe2d000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 14344192 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:43.664206+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 14344192 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:44.664693+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc1ff000/0x0/0x4ffc00000, data 0xd5116e/0xe2d000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:45.664915+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:46.665124+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130804 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc1ff000/0x0/0x4ffc00000, data 0xd5116e/0xe2d000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:47.665391+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:48.665836+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.995323181s of 10.002388954s, submitted: 5
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:49.666023+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:50.666384+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:51.666612+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130214 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:52.666758+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc200000/0x0/0x4ffc00000, data 0xd510d3/0xe2c000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:53.667016+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:54.667259+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc200000/0x0/0x4ffc00000, data 0xd510d3/0xe2c000, compress 0x0/0x0/0x0, omap 0x19222, meta 0x2bb6dde), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 14336000 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:55.667497+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:56.667667+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133724 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0xd52cd8/0xe2f000, compress 0x0/0x0/0x0, omap 0x194d3, meta 0x2bb6b2d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:57.667856+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:58.668365+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:14:59.668566+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:00.668925+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:01.669272+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133724 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0xd52cd8/0xe2f000, compress 0x0/0x0/0x0, omap 0x194d3, meta 0x2bb6b2d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:02.669594+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc1fb000/0x0/0x4ffc00000, data 0xd52cd8/0xe2f000, compress 0x0/0x0/0x0, omap 0x194d3, meta 0x2bb6b2d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.923968315s of 14.008096695s, submitted: 26
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:03.669911+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:04.670236+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc1fd000/0x0/0x4ffc00000, data 0xd52cd8/0xe2f000, compress 0x0/0x0/0x0, omap 0x194d3, meta 0x2bb6b2d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:05.670460+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:06.670648+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136338 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:07.670927+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:08.671205+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f8000/0x0/0x4ffc00000, data 0xd54757/0xe32000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f8000/0x0/0x4ffc00000, data 0xd54757/0xe32000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:09.671377+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:10.671569+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:11.671765+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138030 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 14327808 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:12.671881+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:13.672049+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:14.672188+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f8000/0x0/0x4ffc00000, data 0xd5484f/0xe34000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:15.672485+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:16.672608+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139002 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:17.672720+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f8000/0x0/0x4ffc00000, data 0xd5484f/0xe34000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:18.672880+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:19.673045+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 14319616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:20.673238+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.640260696s of 17.663045883s, submitted: 15
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 14303232 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:21.673455+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140422 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f6000/0x0/0x4ffc00000, data 0xd54918/0xe35000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 14303232 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:22.673679+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 14303232 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:23.673872+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 14303232 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:24.674045+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 14295040 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:25.674280+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 13246464 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:26.674501+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139848 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f7000/0x0/0x4ffc00000, data 0xd5484f/0xe34000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 12
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13238272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:27.674705+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13238272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:28.674978+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13238272 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 13
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:29.675129+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:30.675262+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:31.675458+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140582 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:32.675712+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f7000/0x0/0x4ffc00000, data 0xd548ea/0xe35000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.337009430s of 12.364937782s, submitted: 12
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:33.675931+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:34.676127+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:35.676314+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 13164544 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:36.676518+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140406 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f7000/0x0/0x4ffc00000, data 0xd548bd/0xe35000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 13156352 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:37.676659+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 13148160 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:38.676866+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 13148160 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:39.677041+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 13139968 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:40.677186+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 13099008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:41.677361+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141524 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fc1f7000/0x0/0x4ffc00000, data 0xd54897/0xe35000, compress 0x0/0x0/0x0, omap 0x1974d, meta 0x2bb68b3), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 13099008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:42.677526+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 13099008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:43.677695+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.921016693s of 11.004405975s, submitted: 14
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 13074432 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:44.677874+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 13066240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:45.678064+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc1f0000/0x0/0x4ffc00000, data 0xd564f2/0xe39000, compress 0x0/0x0/0x0, omap 0x19a01, meta 0x2bb65ff), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 13066240 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:46.678244+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146422 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 13058048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:47.678386+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 13058048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:48.678527+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 13058048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:49.678745+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fc1f0000/0x0/0x4ffc00000, data 0xd564c2/0xe38000, compress 0x0/0x0/0x0, omap 0x19a01, meta 0x2bb65ff), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 13058048 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:50.678914+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 13049856 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:51.679092+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1ef000/0x0/0x4ffc00000, data 0xd57f41/0xe3b000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148446 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 13049856 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:52.679281+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 13049856 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:53.679460+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1ef000/0x0/0x4ffc00000, data 0xd57f41/0xe3b000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.945789337s of 10.005077362s, submitted: 39
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:54.679651+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:55.679806+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:56.679919+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148590 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:57.680183+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:58.680422+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:15:59.680570+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0xd57f41/0xe3b000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:00.680714+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 13172736 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:01.680980+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f1000/0x0/0x4ffc00000, data 0xd57f41/0xe3b000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147870 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 13164544 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:02.681201+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 13156352 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:03.681397+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 13156352 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:04.681581+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.998967171s of 11.010238647s, submitted: 6
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 12107776 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:05.681776+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0xd57ea6/0xe3a000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:06.681947+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 12107776 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147296 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:07.682154+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 12099584 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0xd57ea6/0xe3a000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f2000/0x0/0x4ffc00000, data 0xd57ea6/0xe3a000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:08.682325+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:09.682528+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:10.682658+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:11.682816+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146100 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:12.683036+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:13.683208+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:14.683378+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:15.683552+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.987800598s of 11.003258705s, submitted: 9
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:16.683665+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:17.683799+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:18.683976+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:19.684116+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:20.684224+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:21.684370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:22.684658+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:23.684918+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:24.685105+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:25.685396+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:26.685536+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:27.685698+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread fragmentation_score=0.000137 took=0.000025s
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:28.685879+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:29.686078+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:30.686249+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.008407593s of 15.011620522s, submitted: 2
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:31.686406+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146116 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:32.686578+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:33.686711+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:34.686854+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fc1f4000/0x0/0x4ffc00000, data 0xd57ddb/0xe38000, compress 0x0/0x0/0x0, omap 0x19d3b, meta 0x2bb62c5), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:35.687030+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 12083200 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:36.687214+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 12075008 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149500 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:37.687374+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 12058624 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:38.687540+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 12017664 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:39.687724+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 12017664 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:40.687918+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 12017664 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fc1ed000/0x0/0x4ffc00000, data 0xd59b72/0xe3d000, compress 0x0/0x0/0x0, omap 0x19ff3, meta 0x2bb600d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.920986176s of 10.006081581s, submitted: 31
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:41.688082+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 12009472 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152834 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:42.688245+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 11984896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fc1ef000/0x0/0x4ffc00000, data 0xd59b70/0xe3d000, compress 0x0/0x0/0x0, omap 0x19ff3, meta 0x2bb600d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:43.688382+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 11984896 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:44.688494+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 11976704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:45.688631+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 11976704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:46.688803+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 11976704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152944 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:47.688944+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fc1ef000/0x0/0x4ffc00000, data 0xd59b44/0xe3d000, compress 0x0/0x0/0x0, omap 0x19ff3, meta 0x2bb600d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 11976704 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:48.689136+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 14
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:49.689283+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:50.689484+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 11755520 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 149 handle_osd_map epochs [149,150], i have 150, src has [1,150]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:51.689680+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 11755520 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1eb000/0x0/0x4ffc00000, data 0xd5b528/0xe3f000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155688 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:52.689877+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 11755520 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.476469040s of 11.519009590s, submitted: 157
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:53.690118+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 11755520 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:54.690407+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ea000/0x0/0x4ffc00000, data 0xd5b5c3/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 11747328 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:55.690633+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:56.690853+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157540 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:57.690984+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:58.691183+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 11739136 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b65b36000
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:16:59.691391+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:00.691489+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1e9000/0x0/0x4ffc00000, data 0xd5b87e/0xe43000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 15
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:01.691660+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 11583488 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159694 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:02.691839+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 11583488 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:03.692049+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 11583488 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:04.692216+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:05.692428+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:06.692591+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159710 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:07.692781+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:08.692968+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.484195709s of 16.510070801s, submitted: 15
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:09.693169+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:10.693350+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:11.693518+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159726 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:12.694043+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:13.694251+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:14.694466+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:15.694721+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 11575296 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:16.694897+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 11567104 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159550 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:17.695074+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:18.695288+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:19.695628+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.556398392s of 10.567049026s, submitted: 4
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:20.695771+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 11599872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc1ec000/0x0/0x4ffc00000, data 0xd5b591/0xe40000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x2bb5d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:21.695948+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 11370496 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162028 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:22.696135+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 11075584 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:23.696254+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 8077312 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4faffc000/0x0/0x4ffc00000, data 0xdaab89/0xe90000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x3d55d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:24.696389+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 87662592 unmapped: 7757824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:25.696541+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 87662592 unmapped: 7757824 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:26.696708+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fafc2000/0x0/0x4ffc00000, data 0xde46db/0xeca000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x3d55d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 88186880 unmapped: 7233536 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174362 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:27.696846+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 6848512 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 heartbeat osd_stat(store_statfs(0x4faf9f000/0x0/0x4ffc00000, data 0xe07ade/0xeed000, compress 0x0/0x0/0x0, omap 0x1a268, meta 0x3d55d98), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:28.697042+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 88580096 unmapped: 6840320 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:29.697213+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 7323648 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.963760376s of 10.099595070s, submitted: 51
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:30.697363+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 89726976 unmapped: 5693440 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 151 heartbeat osd_stat(store_statfs(0x4faf1e000/0x0/0x4ffc00000, data 0xe85808/0xf6c000, compress 0x0/0x0/0x0, omap 0x1a523, meta 0x3d55add), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:31.697544+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 6029312 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181080 data_alloc: 218103808 data_used: 6222
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:32.697709+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 6029312 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:33.697861+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 5767168 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:34.698012+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 5062656 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:35.698202+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 90357760 unmapped: 5062656 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:36.698450+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 90103808 unmapped: 5316608 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184038 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 153 heartbeat osd_stat(store_statfs(0x4faeb5000/0x0/0x4ffc00000, data 0xeee1ba/0xfd5000, compress 0x0/0x0/0x0, omap 0x1a83a, meta 0x3d557c6), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:37.698609+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 3522560 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:38.706049+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 3522560 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:39.706285+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 3743744 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.126952171s of 10.349292755s, submitted: 126
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:40.706687+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fae46000/0x0/0x4ffc00000, data 0xf5c8bf/0x1046000, compress 0x0/0x0/0x0, omap 0x1aaf9, meta 0x3d55507), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 3743744 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:41.706869+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 3260416 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198818 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:42.707042+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 2916352 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:43.707203+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 2916352 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:44.707390+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fae08000/0x0/0x4ffc00000, data 0xf9d0dc/0x1084000, compress 0x0/0x0/0x0, omap 0x1aaf9, meta 0x3d55507), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 2809856 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:45.707674+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93249536 unmapped: 2170880 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:46.707881+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fada7000/0x0/0x4ffc00000, data 0xff792d/0x10e1000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93388800 unmapped: 2031616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fada7000/0x0/0x4ffc00000, data 0xff792d/0x10e1000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201794 data_alloc: 218103808 data_used: 6071
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:47.708068+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fada7000/0x0/0x4ffc00000, data 0xff792d/0x10e1000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93388800 unmapped: 2031616 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:48.708254+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 2383872 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:49.708464+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93265920 unmapped: 2154496 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:50.708670+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2334720 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fad59000/0x0/0x4ffc00000, data 0x10486dc/0x1133000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.846508026s of 11.055044174s, submitted: 92
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:51.708983+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fad59000/0x0/0x4ffc00000, data 0x10486dc/0x1133000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2236416 heap: 95420416 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211294 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:52.709220+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 3276800 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:53.709443+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 93356032 unmapped: 3112960 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:54.709672+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 2146304 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:55.709843+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 1474560 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 155 handle_osd_map epochs [155,156], i have 156, src has [1,156]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4facd2000/0x0/0x4ffc00000, data 0x10cf2f6/0x11ba000, compress 0x0/0x0/0x0, omap 0x1b0a5, meta 0x3d54f5b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:56.709960+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2416640 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 16
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231980 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:57.710174+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 1720320 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:58.710362+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94642176 unmapped: 2875392 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac7b000/0x0/0x4ffc00000, data 0x1123f11/0x1211000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:17:59.710519+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94650368 unmapped: 2867200 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:00.710749+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 2686976 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:01.710985+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 2686976 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226776 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.372348785s of 10.977797508s, submitted: 88
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:02.711160+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 2686976 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:03.711287+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94838784 unmapped: 2678784 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:04.711447+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94838784 unmapped: 2678784 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac37000/0x0/0x4ffc00000, data 0x1165be9/0x1255000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:05.716722+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94838784 unmapped: 2678784 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:06.716915+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac37000/0x0/0x4ffc00000, data 0x11659ed/0x1252000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223910 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:07.717039+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac37000/0x0/0x4ffc00000, data 0x11659ed/0x1252000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac37000/0x0/0x4ffc00000, data 0x11659ed/0x1252000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:08.717178+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fac3a000/0x0/0x4ffc00000, data 0x11659ed/0x1252000, compress 0x0/0x0/0x0, omap 0x1b5fb, meta 0x3d54a05), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:09.717382+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:10.717583+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:11.717779+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94846976 unmapped: 2670592 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228122 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671333313s of 10.155095100s, submitted: 31
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:12.717941+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 2662400 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:13.718087+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 2662400 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:14.718247+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fac34000/0x0/0x4ffc00000, data 0x116768d/0x1256000, compress 0x0/0x0/0x0, omap 0x1b8c1, meta 0x3d5473f), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 2662400 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fac34000/0x0/0x4ffc00000, data 0x116768d/0x1256000, compress 0x0/0x0/0x0, omap 0x1b8c1, meta 0x3d5473f), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:15.718457+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94863360 unmapped: 2654208 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:16.718634+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94863360 unmapped: 2654208 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226540 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:17.718773+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94863360 unmapped: 2654208 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:18.719009+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94863360 unmapped: 2654208 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:19.719204+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94863360 unmapped: 2654208 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:20.719397+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fac33000/0x0/0x4ffc00000, data 0x11676bb/0x1256000, compress 0x0/0x0/0x0, omap 0x1b8c1, meta 0x3d5473f), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 2637824 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:21.719575+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fac31000/0x0/0x4ffc00000, data 0x116913a/0x1259000, compress 0x0/0x0/0x0, omap 0x1bb13, meta 0x3d544ed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2621440 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235762 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:22.719835+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.606031418s of 10.834803581s, submitted: 46
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:23.720075+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:24.720231+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:25.720539+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:26.720788+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x116abdb/0x125a000, compress 0x0/0x0/0x0, omap 0x1bddc, meta 0x3d54224), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233398 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:27.720959+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:28.721105+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:29.721260+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 2613248 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x116abdb/0x125a000, compress 0x0/0x0/0x0, omap 0x1bddc, meta 0x3d54224), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:30.721384+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x116abdb/0x125a000, compress 0x0/0x0/0x0, omap 0x1bddc, meta 0x3d54224), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 159 handle_osd_map epochs [159,160], i have 160, src has [1,160]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2596864 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:31.721535+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 2596864 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236748 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:32.721664+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:33.721794+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.984370232s of 11.008815765s, submitted: 17
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:34.721949+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:35.722186+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:36.722370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x116c6f5/0x125e000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237720 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:37.722541+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:38.722736+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:39.722902+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x116c790/0x125f000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:40.723039+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2580480 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:41.723177+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2572288 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240226 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:42.723326+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2572288 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x116c7bd/0x125f000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:43.723518+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2572288 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:44.723711+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x116c7bd/0x125f000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.824722290s of 10.842454910s, submitted: 9
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:45.723936+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:46.724123+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241328 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:47.724338+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x116c790/0x125f000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:48.724477+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:49.724600+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:50.724771+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:51.724928+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239636 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:52.725103+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x116c6f5/0x125e000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:53.725259+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x116c6f5/0x125e000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2564096 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:54.726068+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94969856 unmapped: 2547712 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.164185524s of 10.176877975s, submitted: 7
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:55.726330+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94969856 unmapped: 2547712 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:56.726515+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94969856 unmapped: 2547712 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239668 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:57.726686+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x116c6f5/0x125e000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:58.727081+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:18:59.728805+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:00.728991+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:01.730432+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2f000/0x0/0x4ffc00000, data 0x116c65a/0x125d000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2f000/0x0/0x4ffc00000, data 0x116c65a/0x125d000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239062 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:02.730753+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:03.730896+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:04.731499+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.995107651s of 10.009556770s, submitted: 8
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:05.731690+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:06.731839+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2f000/0x0/0x4ffc00000, data 0x116c65a/0x125d000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 2539520 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239062 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:07.732628+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 2531328 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:08.732927+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 2531328 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:09.733370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1474560 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:10.733796+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1474560 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:11.734162+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x116c6f5/0x125e000, compress 0x0/0x0/0x0, omap 0x1c113, meta 0x3d53eed), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1474560 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:12.734355+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240738 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1474560 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:13.734637+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1474560 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:14.734951+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 160 handle_osd_map epochs [160,161], i have 161, src has [1,161]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.962542534s of 10.019806862s, submitted: 76
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 96051200 unmapped: 1466368 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:15.735185+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 1622016 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fac2d000/0x0/0x4ffc00000, data 0x116e1f4/0x125f000, compress 0x0/0x0/0x0, omap 0x1c3d7, meta 0x3d53c29), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:16.735379+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 1622016 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:17.735615+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241694 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 95895552 unmapped: 1622016 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:18.735801+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 161 ms_handle_reset con 0x563b65b36000 session 0x563b6563cc40
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 17
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97132544 unmapped: 1433600 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:19.736019+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97132544 unmapped: 1433600 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:20.736398+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:21.736601+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac27000/0x0/0x4ffc00000, data 0x116fd2e/0x1263000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:22.736922+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246736 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac27000/0x0/0x4ffc00000, data 0x116fd2e/0x1263000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:23.737200+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:24.737436+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:25.737660+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:26.737817+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:27.737952+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247708 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 1417216 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:28.738194+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac28000/0x0/0x4ffc00000, data 0x116fd8b/0x1264000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.434858322s of 13.463528633s, submitted: 150
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:29.738365+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:30.738532+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:31.738713+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:32.738880+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247724 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:33.739044+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:34.739892+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac27000/0x0/0x4ffc00000, data 0x116fd5e/0x1264000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:35.740089+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:36.740250+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:37.740353+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac28000/0x0/0x4ffc00000, data 0x116fd5e/0x1264000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247580 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97157120 unmapped: 1409024 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:38.740503+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fac28000/0x0/0x4ffc00000, data 0x116fd5e/0x1264000, compress 0x0/0x0/0x0, omap 0x1c7b7, meta 0x3d53849), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.529351234s of 10.545085907s, submitted: 8
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97042432 unmapped: 1523712 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:39.740708+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:40.740864+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:41.741025+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:42.741184+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251058 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fac23000/0x0/0x4ffc00000, data 0x1171963/0x1267000, compress 0x0/0x0/0x0, omap 0x1ca87, meta 0x3d53579), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:43.741342+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fac23000/0x0/0x4ffc00000, data 0x1171963/0x1267000, compress 0x0/0x0/0x0, omap 0x1ca87, meta 0x3d53579), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:44.741518+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:45.741671+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac25000/0x0/0x4ffc00000, data 0x1171963/0x1267000, compress 0x0/0x0/0x0, omap 0x1ca87, meta 0x3d53579), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:46.741809+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac20000/0x0/0x4ffc00000, data 0x11733e2/0x126a000, compress 0x0/0x0/0x0, omap 0x1ccdb, meta 0x3d53325), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:47.742015+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253688 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac20000/0x0/0x4ffc00000, data 0x11733e2/0x126a000, compress 0x0/0x0/0x0, omap 0x1ccdb, meta 0x3d53325), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac20000/0x0/0x4ffc00000, data 0x11733e2/0x126a000, compress 0x0/0x0/0x0, omap 0x1ccdb, meta 0x3d53325), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:48.742169+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:49.742310+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1515520 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.615376472s of 10.689306259s, submitted: 35
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:50.742429+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 1269760 heap: 98566144 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:51.742569+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97296384 unmapped: 2318336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:52.742737+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 2285568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256934 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:53.742986+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 2285568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fac0d000/0x0/0x4ffc00000, data 0x1187463/0x127f000, compress 0x0/0x0/0x0, omap 0x1ccdb, meta 0x3d53325), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:54.743111+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97370112 unmapped: 2244608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:55.743332+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 2015232 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:56.743495+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 1957888 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:57.743668+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 1949696 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267980 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fabba000/0x0/0x4ffc00000, data 0x11d4965/0x12ce000, compress 0x0/0x0/0x0, omap 0x1d283, meta 0x3d52d7d), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:58.743812+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 1810432 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:19:59.743953+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 1810432 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:00.744113+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97730560 unmapped: 1884160 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.178684235s of 11.319760323s, submitted: 78
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:01.744289+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2334720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fab71000/0x0/0x4ffc00000, data 0x121e25d/0x1319000, compress 0x0/0x0/0x0, omap 0x1d4e5, meta 0x3d52b1b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:02.744515+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2334720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269974 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:03.744675+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2334720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:04.744862+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97206272 unmapped: 2408448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:05.745210+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97132544 unmapped: 2482176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:06.745378+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97132544 unmapped: 2482176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fab4f000/0x0/0x4ffc00000, data 0x124184f/0x133d000, compress 0x0/0x0/0x0, omap 0x1d4e5, meta 0x3d52b1b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:07.745568+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97132544 unmapped: 2482176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271302 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:08.745799+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2301952 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:09.746011+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 761856 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:10.746158+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 168 heartbeat osd_stat(store_statfs(0x4faafe000/0x0/0x4ffc00000, data 0x12907cb/0x138e000, compress 0x0/0x0/0x0, omap 0x1d4e5, meta 0x3d52b1b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 1818624 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:11.746389+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 1818624 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:12.746592+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 1818624 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278656 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.494337082s of 11.583576202s, submitted: 50
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:13.746794+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 1835008 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 169 heartbeat osd_stat(store_statfs(0x4faaf7000/0x0/0x4ffc00000, data 0x1293f72/0x1393000, compress 0x0/0x0/0x0, omap 0x1db3f, meta 0x3d524c1), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:14.747148+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 1925120 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:15.747349+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 1925120 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 169 heartbeat osd_stat(store_statfs(0x4faab5000/0x0/0x4ffc00000, data 0x12d5f9a/0x13d5000, compress 0x0/0x0/0x0, omap 0x1db3f, meta 0x3d524c1), peers [1,2] op hist [0,1])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:16.747974+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 1720320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 169 heartbeat osd_stat(store_statfs(0x4faab3000/0x0/0x4ffc00000, data 0x12dae30/0x13d9000, compress 0x0/0x0/0x0, omap 0x1db3f, meta 0x3d524c1), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:17.748176+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 1720320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283942 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:18.748355+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 1720320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:19.748554+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98279424 unmapped: 2383872 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:20.748755+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98279424 unmapped: 2383872 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:21.748931+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98279424 unmapped: 2383872 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:22.749110+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 169 handle_osd_map epochs [170,171], i have 169, src has [1,171]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 2179072 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 171 heartbeat osd_stat(store_statfs(0x4faa7b000/0x0/0x4ffc00000, data 0x130e854/0x140f000, compress 0x0/0x0/0x0, omap 0x1dd9d, meta 0x3d52263), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291886 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:23.749262+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 2179072 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:24.749461+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.526525497s of 11.667844772s, submitted: 79
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 1949696 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:25.749639+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 1884160 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:26.749807+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1818624 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 171 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x131e483/0x141e000, compress 0x0/0x0/0x0, omap 0x1dd9d, meta 0x3d52263), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:27.750008+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1818624 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292982 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 171 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x131e483/0x141e000, compress 0x0/0x0/0x0, omap 0x1dd9d, meta 0x3d52263), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:28.750271+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1818624 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:29.750451+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1818624 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:30.750584+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 1818624 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 8950 writes, 33K keys, 8950 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8950 writes, 2269 syncs, 3.94 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3087 writes, 8468 keys, 3087 commit groups, 1.0 writes per commit group, ingest: 9.99 MB, 0.02 MB/s
                                           Interval WAL: 3087 writes, 1257 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa66000/0x0/0x4ffc00000, data 0x1324393/0x1424000, compress 0x0/0x0/0x0, omap 0x1dd9d, meta 0x3d52263), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:31.750763+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 2293760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:32.750978+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 2293760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:33.751156+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 2293760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:34.751356+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 2293760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:35.751558+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 2293760 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc ms_handle_reset ms_handle_reset con 0x563b62c1c800
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3695062931
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: get_auth_request con 0x563b65a91800 auth_method 0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_configure stats_period=5
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:36.751722+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:37.751885+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:38.752037+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:39.752153+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:40.752278+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 ms_handle_reset con 0x563b62bf2c00 session 0x563b65682380
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: handle_auth_request added challenge on 0x563b62bf2000
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:41.752434+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:42.752561+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:43.752740+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:44.752863+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:45.753042+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:46.753165+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:47.753394+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:48.753560+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:49.753730+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:50.753849+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:51.753988+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:52.754137+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:53.754337+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:54.754493+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:55.754664+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:56.754844+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:57.755006+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:58.755184+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:20:59.755356+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:00.755532+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:01.755704+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:02.755894+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:03.756033+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:04.756220+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:05.756411+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:06.756694+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:07.756869+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa63000/0x0/0x4ffc00000, data 0x1325e32/0x1427000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295372 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:08.757064+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 2195456 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.880153656s of 44.894542694s, submitted: 14
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:09.757226+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 1908736 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa49000/0x0/0x4ffc00000, data 0x1341c45/0x1443000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:10.757383+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 1802240 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa42000/0x0/0x4ffc00000, data 0x1348b43/0x144a000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:11.757512+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 1802240 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:12.757643+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 1802240 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296908 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:13.757816+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 1712128 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:14.757988+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 1712128 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:15.758216+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 1679360 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:16.758407+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 1679360 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa05000/0x0/0x4ffc00000, data 0x1385c4d/0x1487000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa05000/0x0/0x4ffc00000, data 0x1385c4d/0x1487000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:17.758572+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 1679360 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 heartbeat osd_stat(store_statfs(0x4faa05000/0x0/0x4ffc00000, data 0x1385c4d/0x1487000, compress 0x0/0x0/0x0, omap 0x1e175, meta 0x3d51e8b), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300324 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:18.758775+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1515520 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:19.758985+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.056054115s of 10.106918335s, submitted: 22
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1679360 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:20.759128+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1671168 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa9d9000/0x0/0x4ffc00000, data 0x13ae336/0x14b1000, compress 0x0/0x0/0x0, omap 0x1e454, meta 0x3d51bac), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:21.759253+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 18
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 2613248 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:22.759413+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 2564096 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305890 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:23.759584+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 19
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100229120 unmapped: 2531328 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:24.759765+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa988000/0x0/0x4ffc00000, data 0x1401464/0x1504000, compress 0x0/0x0/0x0, omap 0x1e454, meta 0x3d51bac), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 2433024 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:25.759992+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 2433024 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:26.760164+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 2433024 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:27.760350+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 2408448 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304890 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:28.760554+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 2408448 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:29.760693+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671206474s of 10.001389503s, submitted: 144
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 2621440 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 173 heartbeat osd_stat(store_statfs(0x4fa96b000/0x0/0x4ffc00000, data 0x141dffa/0x1521000, compress 0x0/0x0/0x0, omap 0x1e454, meta 0x3d51bac), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:30.760958+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 2588672 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:31.761148+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 3473408 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fa966000/0x0/0x4ffc00000, data 0x141fa79/0x1524000, compress 0x0/0x0/0x0, omap 0x1e6fd, meta 0x3d51903), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:32.761432+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 3473408 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309696 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:33.761615+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 3473408 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:34.761785+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 3473408 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:35.761989+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 3424256 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:36.762122+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fa94a000/0x0/0x4ffc00000, data 0x143c83f/0x1542000, compress 0x0/0x0/0x0, omap 0x1e6fd, meta 0x3d51903), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 3424256 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:37.762354+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 3424256 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313404 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:38.762510+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 3309568 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:39.762681+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.951098442s of 10.001032829s, submitted: 34
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 3309568 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fa930000/0x0/0x4ffc00000, data 0x1456ab5/0x155c000, compress 0x0/0x0/0x0, omap 0x1e6fd, meta 0x3d51903), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:40.762859+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 3244032 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:41.762979+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 3244032 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:42.763119+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 3244032 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313524 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:43.763222+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 3153920 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:44.763374+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fa8db000/0x0/0x4ffc00000, data 0x14ab7a2/0x15b1000, compress 0x0/0x0/0x0, omap 0x1e6fd, meta 0x3d51903), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 3072000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:45.763551+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:46.763657+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 2957312 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:47.763784+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 2777088 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317798 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:48.763945+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fa8a5000/0x0/0x4ffc00000, data 0x14e2e38/0x15e7000, compress 0x0/0x0/0x0, omap 0x1e6fd, meta 0x3d51903), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 2777088 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:49.764444+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.959295273s of 10.000157356s, submitted: 22
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 175 heartbeat osd_stat(store_statfs(0x4fa89a000/0x0/0x4ffc00000, data 0x14eac18/0x15f0000, compress 0x0/0x0/0x0, omap 0x1e9df, meta 0x3d51621), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 2646016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:50.764567+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 2646016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:51.764731+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 2646016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:52.764880+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 2646016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321100 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:53.765019+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 2646016 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:54.765182+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101597184 unmapped: 1163264 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:55.765367+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 175 heartbeat osd_stat(store_statfs(0x4fa86d000/0x0/0x4ffc00000, data 0x151981f/0x161f000, compress 0x0/0x0/0x0, omap 0x1e9df, meta 0x3d51621), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101793792 unmapped: 966656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:56.765546+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:57.765699+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:58.765836+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:21:59.766015+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:00.766179+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:01.766383+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:02.766535+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _renew_subs
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:03.766736+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:04.766898+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:05.767092+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:06.767218+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:07.767378+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:08.767508+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:09.767680+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:10.767855+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:11.768009+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:12.768180+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:13.768380+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:14.768607+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:15.768831+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:16.768986+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:17.769231+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:18.769401+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:19.769700+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:20.769925+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:21.770136+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:22.770370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:23.770549+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:24.770745+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:25.770987+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:26.771170+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:27.771380+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:28.771615+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:29.771798+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:30.771989+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:31.772150+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:32.772361+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:33.772539+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:34.772723+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:35.772974+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:36.773145+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:37.773271+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:38.773432+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:39.773575+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:40.773737+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:41.773972+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:42.774138+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:43.774342+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:44.774497+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:45.774719+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:46.774889+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa868000/0x0/0x4ffc00000, data 0x151b29e/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:47.775119+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 1146880 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324902 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:48.775318+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.125205994s of 59.182971954s, submitted: 42
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 ms_handle_reset con 0x563b65b39c00 session 0x563b65616a80
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101785600 unmapped: 2023424 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:49.775609+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101785600 unmapped: 2023424 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:50.775751+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101785600 unmapped: 2023424 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:51.775899+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Got map version 20
Feb 01 15:24:00 compute-0 ceph-osd[85969]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/3695062931,v1:192.168.122.100:6801/3695062931]
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:52.776048+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:53.776206+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:54.776370+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:55.776672+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:56.776817+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:57.777120+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:58.777528+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:22:59.777723+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:00.777930+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:01.778126+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:02.778918+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:03.779103+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:04.779255+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:05.779505+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:06.779652+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:07.779853+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:08.780057+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:09.780251+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:10.780418+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:11.780530+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:12.780679+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:13.780805+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:14.780931+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:15.781076+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:16.781194+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:17.781364+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:18.781565+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101810176 unmapped: 1998848 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:19.781728+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:20.781922+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:21.782405+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:22.782550+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:23.782683+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:24.782805+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fa86a000/0x0/0x4ffc00000, data 0x151b4b1/0x1622000, compress 0x0/0x0/0x0, omap 0x1ec89, meta 0x3d51377), peers [1,2] op hist [])
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:25.783374+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1990656 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:26.783813+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101851136 unmapped: 1957888 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'config diff' '{prefix=config diff}'
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'config show' '{prefix=config show}'
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'counter dump' '{prefix=counter dump}'
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:27.783939+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'counter schema' '{prefix=counter schema}'
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3178496 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:28.784060+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 01 15:24:00 compute-0 ceph-osd[85969]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 01 15:24:00 compute-0 ceph-osd[85969]: prioritycache tune_memory target: 4294967296 mapped: 101859328 unmapped: 2998272 heap: 104857600 old mem: 2845415832 new mem: 2845415832
Feb 01 15:24:00 compute-0 ceph-osd[85969]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324374 data_alloc: 218103808 data_used: 6721
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: tick
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_tickets
Feb 01 15:24:00 compute-0 ceph-osd[85969]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-01T15:23:29.784185+0000)
Feb 01 15:24:00 compute-0 ceph-osd[85969]: do_command 'log dump' '{prefix=log dump}'
Feb 01 15:24:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14634 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} v 0)
Feb 01 15:24:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='client.14630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='client.14634 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: from='mgr.14122 192.168.122.100:0/609003286' entity='mgr.compute-0.viosrg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.eusbkm", "name": "rgw_frontends"} : dispatch
Feb 01 15:24:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:00 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14642 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:00 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 01 15:24:00 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661786201' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 01 15:24:01 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14644 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:24:01 compute-0 ceph-mon[75179]: from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:01 compute-0 ceph-mon[75179]: pgmap v1178: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:01 compute-0 ceph-mon[75179]: from='client.14642 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:01 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/661786201' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 01 15:24:01 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Feb 01 15:24:01 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370637519' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8827703' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 01 15:24:02 compute-0 systemd[1]: Starting Hostname Service...
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.319 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:24:02 compute-0 systemd[1]: Started Hostname Service.
Feb 01 15:24:02 compute-0 ceph-mon[75179]: from='client.14644 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1370637519' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/8827703' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694023519' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 01 15:24:02 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.601 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.602 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.602 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.603 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 01 15:24:02 compute-0 nova_compute[238794]: 2026-02-01 15:24:02.603 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 01 15:24:02 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 01 15:24:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 01 15:24:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807038599' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:24:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577987460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.129 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.252 238798 WARNING nova.virt.libvirt.driver [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.253 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4730MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.254 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.254 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.313 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.314 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.336 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/2694023519' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 01 15:24:03 compute-0 ceph-mon[75179]: pgmap v1179: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/807038599' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1577987460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:24:03 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:03 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 01 15:24:03 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675406841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.804 238798 DEBUG oslo_concurrency.processutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.808 238798 DEBUG nova.compute.provider_tree [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.838 238798 DEBUG nova.scheduler.client.report [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Inventory has not changed for provider 1aa3221d-258f-4b8d-a88f-2cbf3dfc8f18 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.840 238798 DEBUG nova.compute.resource_tracker [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 01 15:24:03 compute-0 nova_compute[238794]: 2026-02-01 15:24:03.841 238798 DEBUG oslo_concurrency.lockutils [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 01 15:24:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 01 15:24:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228804113' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 01 15:24:04 compute-0 ceph-mon[75179]: from='client.14664 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/675406841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 01 15:24:04 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/4228804113' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 01 15:24:04 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Feb 01 15:24:04 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614821564' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 01 15:24:04 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 01 15:24:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286978' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 01 15:24:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/1614821564' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 01 15:24:05 compute-0 ceph-mon[75179]: pgmap v1180: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
Feb 01 15:24:05 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3286978' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 01 15:24:05 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 01 15:24:05 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329757717' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 01 15:24:05 compute-0 nova_compute[238794]: 2026-02-01 15:24:05.842 238798 DEBUG oslo_service.periodic_task [None req-fc1c04e9-ecf3-4824-afc1-dbf7f340ac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 01 15:24:05 compute-0 ceph-mgr[75469]: log_channel(audit) log [DBG] : from='client.14676 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 01 15:24:06 compute-0 ceph-mon[75179]: from='client.? 192.168.122.100:0/3329757717' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 01 15:24:06 compute-0 ceph-mon[75179]: from='client.14676 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 01 15:24:06 compute-0 ceph-mon[75179]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 01 15:24:06 compute-0 ceph-mon[75179]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144293146' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 01 15:24:06 compute-0 ceph-mgr[75469]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail
